Test Report: Docker_Linux_crio_arm64 22179

505b1c9a8fd96db2c5d776a2dde7c3c6efd2d048:2025-12-22:42914

Failed tests (55/416)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.38
44 TestAddons/parallel/Registry 16.2
45 TestAddons/parallel/RegistryCreds 0.5
46 TestAddons/parallel/Ingress 142.73
47 TestAddons/parallel/InspektorGadget 6.32
48 TestAddons/parallel/MetricsServer 6.36
50 TestAddons/parallel/CSI 34.63
51 TestAddons/parallel/Headlamp 3.13
52 TestAddons/parallel/CloudSpanner 5.35
53 TestAddons/parallel/LocalPath 8.37
54 TestAddons/parallel/NvidiaDevicePlugin 6.27
55 TestAddons/parallel/Yakd 6.29
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 501.45
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 368.78
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.36
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.49
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.36
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 735.41
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.13
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.78
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 2.99
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.29
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.63
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 3.03
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.08
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.31
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.33
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.31
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.36
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.45
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.12
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 119.4
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.17
293 TestJSONOutput/pause/Command 1.72
299 TestJSONOutput/unpause/Command 1.95
358 TestKubernetesUpgrade 790.68
363 TestPause/serial/Pause 9.03
402 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.48
407 TestStartStop/group/no-preload/serial/FirstStart 515.29
411 TestStartStop/group/old-k8s-version/serial/Pause 5.99
415 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.52
422 TestStartStop/group/embed-certs/serial/Pause 6.18
426 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.41
433 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.28
435 TestStartStop/group/newest-cni/serial/FirstStart 501.1
436 TestStartStop/group/no-preload/serial/DeployApp 2.98
437 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 116.48
440 TestStartStop/group/no-preload/serial/SecondStart 370.01
442 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 102.85
445 TestStartStop/group/newest-cni/serial/SecondStart 374.74
446 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.48
450 TestStartStop/group/newest-cni/serial/Pause 9.49
467 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 253.04
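
To reproduce any one of these locally, the integration suite can be filtered by test name. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; the -run pattern and start args below mirror this job's configuration (docker driver, crio runtime), though the exact flag spelling may vary by minikube version:

	# Run a single failing test against the docker driver with the crio runtime.
	go test ./test/integration -v -timeout 60m \
	  -run 'TestAddons/serial/Volcano' \
	  -args --minikube-start-args='--driver=docker --container-runtime=crio'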
TestAddons/serial/Volcano (0.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable volcano --alsologtostderr -v=1: exit status 11 (377.384864ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:11:24.917694 1888221 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:11:24.920224 1888221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:24.920277 1888221 out.go:374] Setting ErrFile to fd 2...
	I1222 00:11:24.920298 1888221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:24.920621 1888221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:11:24.920966 1888221 mustload.go:66] Loading cluster: addons-568394
	I1222 00:11:24.921463 1888221 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:24.921510 1888221 addons.go:622] checking whether the cluster is paused
	I1222 00:11:24.921661 1888221 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:24.921692 1888221 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:11:24.922244 1888221 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:11:24.955687 1888221 ssh_runner.go:195] Run: systemctl --version
	I1222 00:11:24.955751 1888221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:11:24.972490 1888221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:11:25.104472 1888221 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:11:25.104568 1888221 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:11:25.141090 1888221 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:11:25.141116 1888221 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:11:25.141121 1888221 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:11:25.141125 1888221 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:11:25.141129 1888221 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:11:25.141133 1888221 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:11:25.141136 1888221 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:11:25.141140 1888221 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:11:25.141143 1888221 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:11:25.141153 1888221 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:11:25.141157 1888221 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:11:25.141161 1888221 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:11:25.141165 1888221 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:11:25.141169 1888221 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:11:25.141172 1888221 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:11:25.141180 1888221 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:11:25.141184 1888221 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:11:25.141189 1888221 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:11:25.141192 1888221 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:11:25.141195 1888221 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:11:25.141199 1888221 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:11:25.141207 1888221 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:11:25.141211 1888221 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:11:25.141214 1888221 cri.go:96] found id: ""
	I1222 00:11:25.141269 1888221 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:11:25.169636 1888221 out.go:203] 
	W1222 00:11:25.172603 1888221 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:11:25.172647 1888221 out.go:285] * 
	* 
	W1222 00:11:25.205006 1888221 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:11:25.208060 1888221 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.38s)
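
Every MK_ADDON_DISABLE_PAUSED failure in this report follows the pattern above: "addons disable" first checks whether the cluster is paused, the crictl listing of kube-system containers succeeds, but the follow-up "sudo runc list -f json" exits 1 because /run/runc does not exist on the node. A sketch of the same two commands run by hand over the profile's ssh session (both commands are taken verbatim from the log above; only the ssh wrapping is added):

	# Container listing used for the paused-state check; this step succeeds:
	out/minikube-linux-arm64 -p addons-568394 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The call that actually fails on this crio node, since runc's state
	# directory /run/runc is missing:
	out/minikube-linux-arm64 -p addons-568394 ssh -- sudo runc list -f json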

TestAddons/parallel/Registry (16.2s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 13.272805ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003502402s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002856744s
addons_test.go:394: (dbg) Run:  kubectl --context addons-568394 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-568394 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-568394 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.653641083s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 ip
2025/12/22 00:11:51 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable registry --alsologtostderr -v=1: exit status 11 (269.616411ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:11:51.562735 1889316 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:11:51.563502 1889316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:51.563622 1889316 out.go:374] Setting ErrFile to fd 2...
	I1222 00:11:51.563644 1889316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:51.563916 1889316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:11:51.564249 1889316 mustload.go:66] Loading cluster: addons-568394
	I1222 00:11:51.564702 1889316 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:51.564745 1889316 addons.go:622] checking whether the cluster is paused
	I1222 00:11:51.564885 1889316 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:51.564914 1889316 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:11:51.565550 1889316 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:11:51.586323 1889316 ssh_runner.go:195] Run: systemctl --version
	I1222 00:11:51.586634 1889316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:11:51.603691 1889316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:11:51.700157 1889316 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:11:51.700270 1889316 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:11:51.731845 1889316 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:11:51.731867 1889316 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:11:51.731874 1889316 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:11:51.731877 1889316 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:11:51.731883 1889316 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:11:51.731893 1889316 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:11:51.731897 1889316 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:11:51.731900 1889316 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:11:51.731903 1889316 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:11:51.731909 1889316 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:11:51.731916 1889316 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:11:51.731919 1889316 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:11:51.731921 1889316 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:11:51.731924 1889316 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:11:51.731928 1889316 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:11:51.731936 1889316 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:11:51.731940 1889316 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:11:51.731944 1889316 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:11:51.731947 1889316 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:11:51.731950 1889316 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:11:51.731954 1889316 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:11:51.731957 1889316 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:11:51.731961 1889316 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:11:51.731964 1889316 cri.go:96] found id: ""
	I1222 00:11:51.732016 1889316 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:11:51.748947 1889316 out.go:203] 
	W1222 00:11:51.751932 1889316 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:11:51.751952 1889316 out.go:285] * 
	* 
	W1222 00:11:51.769992 1889316 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:11:51.773082 1889316 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.20s)

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.743046ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-568394
addons_test.go:334: (dbg) Run:  kubectl --context addons-568394 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (277.85954ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:13.615279 1890099 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:13.616771 1890099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:13.616808 1890099 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:13.616829 1890099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:13.617124 1890099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:13.617506 1890099 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:13.617956 1890099 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:13.618000 1890099 addons.go:622] checking whether the cluster is paused
	I1222 00:12:13.618138 1890099 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:13.618168 1890099 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:13.618708 1890099 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:13.636733 1890099 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:13.636782 1890099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:13.654322 1890099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:13.755773 1890099 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:13.755854 1890099 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:13.786417 1890099 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:13.786440 1890099 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:13.786445 1890099 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:13.786451 1890099 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:13.786455 1890099 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:13.786459 1890099 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:13.786462 1890099 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:13.786465 1890099 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:13.786468 1890099 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:13.786476 1890099 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:13.786479 1890099 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:13.786482 1890099 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:13.786486 1890099 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:13.786489 1890099 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:13.786493 1890099 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:13.786501 1890099 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:13.786504 1890099 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:13.786509 1890099 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:13.786517 1890099 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:13.786520 1890099 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:13.786525 1890099 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:13.786533 1890099 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:13.786536 1890099 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:13.786539 1890099 cri.go:96] found id: ""
	I1222 00:12:13.786591 1890099 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:13.803204 1890099 out.go:203] 
	W1222 00:12:13.806182 1890099 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:13.806207 1890099 out.go:285] * 
	* 
	W1222 00:12:13.824366 1890099 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:13.827678 1890099 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (142.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-568394 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-568394 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-568394 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [ceb50df5-fc91-4719-93b8-68fa45accfbf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [ceb50df5-fc91-4719-93b8-68fa45accfbf] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.005987496s
I1222 00:12:12.099056 1881219 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.217887968s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
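
curl exits with status 28 (CURLE_OPERATION_TIMEDOUT) when a transfer times out, so the request to the ingress controller hung rather than being refused outright. A hypothetical manual retry with an explicit bound and verbose output, using the same binary and profile as the failing step:

	out/minikube-linux-arm64 -p addons-568394 ssh -- \
	  curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/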
addons_test.go:290: (dbg) Run:  kubectl --context addons-568394 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-568394
helpers_test.go:244: (dbg) docker inspect addons-568394:

-- stdout --
	[
	    {
	        "Id": "a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1",
	        "Created": "2025-12-22T00:09:36.012946174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1882620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:09:36.080207369Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/hosts",
	        "LogPath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1-json.log",
	        "Name": "/addons-568394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-568394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-568394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1",
	                "LowerDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-568394",
	                "Source": "/var/lib/docker/volumes/addons-568394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-568394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-568394",
	                "name.minikube.sigs.k8s.io": "addons-568394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df8371da999f17342f4b44aff0193d128420bd3236365605cccfa2f2e0ddbecb",
	            "SandboxKey": "/var/run/docker/netns/df8371da999f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-568394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:cd:35:b0:5b:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a087820eb8ed70c451d6da6176a5e2b9ebfc3067b99634f05c9c1cdf6468bcf",
	                    "EndpointID": "d30364cbb6ad6ba09a13b40de31e5967424a7d04cb19fcc6341eab17d9689495",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-568394",
	                        "a413d8502110"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-568394 -n addons-568394
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-568394 logs -n 25: (1.408587983s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-892972                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-892972 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ --download-only -p binary-mirror-854679 --alsologtostderr --binary-mirror http://127.0.0.1:46683 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-854679   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ -p binary-mirror-854679                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-854679   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ addons  │ disable dashboard -p addons-568394                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ addons  │ enable dashboard -p addons-568394                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ start   │ -p addons-568394 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:11 UTC │
	│ addons  │ addons-568394 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ addons-568394 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ enable headlamp -p addons-568394 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ addons-568394 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ ip      │ addons-568394 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │ 22 Dec 25 00:11 UTC │
	│ addons  │ addons-568394 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ addons-568394 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ addons-568394 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ ssh     │ addons-568394 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ addons  │ addons-568394 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ addons  │ addons-568394 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-568394                                                                                                                                                                                                                                                                                                                                                                                           │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │ 22 Dec 25 00:12 UTC │
	│ addons  │ addons-568394 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ addons  │ addons-568394 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ addons  │ addons-568394 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ ssh     │ addons-568394 ssh cat /opt/local-path-provisioner/pvc-3ef0a689-774b-47cf-ad57-4a455b94d11d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │ 22 Dec 25 00:12 UTC │
	│ addons  │ addons-568394 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ addons  │ addons-568394 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:12 UTC │                     │
	│ ip      │ addons-568394 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:14 UTC │ 22 Dec 25 00:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:09:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
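
	Each entry below follows the glog prefix format just described: severity letter (I/W/E/F), month+day, wall-clock time, PID, then source file and line. A minimal sketch for skimming only the warnings, errors, and fatals out of a log like this (assumes GNU grep; the filename is a placeholder):

	    # "W1222 00:09:35.930763 1882225 cgroups_linux.go:77] ..." = Warning, Dec 22, PID 1882225
	    grep -E '^\s*[WEF][0-9]{4} ' last-start.log
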
	I1222 00:09:30.092123 1882225 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:09:30.092261 1882225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:30.092272 1882225 out.go:374] Setting ErrFile to fd 2...
	I1222 00:09:30.092278 1882225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:30.092642 1882225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:09:30.093194 1882225 out.go:368] Setting JSON to false
	I1222 00:09:30.094093 1882225 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111119,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:09:30.094636 1882225 start.go:143] virtualization:  
	I1222 00:09:30.097961 1882225 out.go:179] * [addons-568394] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:09:30.101695 1882225 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:09:30.101786 1882225 notify.go:221] Checking for updates...
	I1222 00:09:30.107481 1882225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:09:30.110511 1882225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:09:30.113432 1882225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:09:30.116250 1882225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:09:30.119198 1882225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:09:30.122422 1882225 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:09:30.158044 1882225 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:09:30.158186 1882225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:30.218626 1882225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:09:30.208867904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
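
	The driver health check above parses the full `docker system info` JSON; the handful of fields minikube actually cares about can be spot-checked by hand with Go templates (a sketch, assuming the Docker CLI is on PATH):

	    docker info --format '{{.Driver}} / {{.CgroupDriver}}'   # expect: overlay2 / cgroupfs
	    docker info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes RAM'
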
	I1222 00:09:30.218734 1882225 docker.go:319] overlay module found
	I1222 00:09:30.221927 1882225 out.go:179] * Using the docker driver based on user configuration
	I1222 00:09:30.224737 1882225 start.go:309] selected driver: docker
	I1222 00:09:30.224756 1882225 start.go:928] validating driver "docker" against <nil>
	I1222 00:09:30.224771 1882225 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:09:30.225576 1882225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:30.287368 1882225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:09:30.278430721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:30.287522 1882225 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:09:30.287740 1882225 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:09:30.290752 1882225 out.go:179] * Using Docker driver with root privileges
	I1222 00:09:30.293539 1882225 cni.go:84] Creating CNI manager for ""
	I1222 00:09:30.293606 1882225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:09:30.293619 1882225 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:09:30.293701 1882225 start.go:353] cluster config:
	{Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
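
	This cluster config is persisted verbatim to the profile's config.json (written two steps below), so individual settings can be queried after the fact (a sketch, assuming jq is installed on the Jenkins host):

	    jq '{driver: .Driver, k8s: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime}' \
	      /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json
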
	I1222 00:09:30.296817 1882225 out.go:179] * Starting "addons-568394" primary control-plane node in "addons-568394" cluster
	I1222 00:09:30.299619 1882225 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:09:30.302433 1882225 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:09:30.305122 1882225 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 00:09:30.305164 1882225 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:30.305177 1882225 cache.go:65] Caching tarball of preloaded images
	I1222 00:09:30.305197 1882225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:09:30.305256 1882225 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:09:30.305279 1882225 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 00:09:30.305617 1882225 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json ...
	I1222 00:09:30.305692 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json: {Name:mkf6221c0ab697d4028b31a8261a2450c5619982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:30.324013 1882225 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:09:30.324035 1882225 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:09:30.324054 1882225 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:09:30.324091 1882225 start.go:360] acquireMachinesLock for addons-568394: {Name:mkd931fa31da2a4323eebbe142e9972411438b59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:09:30.324218 1882225 start.go:364] duration metric: took 93.06µs to acquireMachinesLock for "addons-568394"
	I1222 00:09:30.324249 1882225 start.go:93] Provisioning new machine with config: &{Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:09:30.324319 1882225 start.go:125] createHost starting for "" (driver="docker")
	I1222 00:09:30.327587 1882225 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1222 00:09:30.327828 1882225 start.go:159] libmachine.API.Create for "addons-568394" (driver="docker")
	I1222 00:09:30.327870 1882225 client.go:173] LocalClient.Create starting
	I1222 00:09:30.327996 1882225 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 00:09:30.571719 1882225 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 00:09:30.734596 1882225 cli_runner.go:164] Run: docker network inspect addons-568394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 00:09:30.752030 1882225 cli_runner.go:211] docker network inspect addons-568394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 00:09:30.752126 1882225 network_create.go:284] running [docker network inspect addons-568394] to gather additional debugging logs...
	I1222 00:09:30.752146 1882225 cli_runner.go:164] Run: docker network inspect addons-568394
	W1222 00:09:30.769656 1882225 cli_runner.go:211] docker network inspect addons-568394 returned with exit code 1
	I1222 00:09:30.769699 1882225 network_create.go:287] error running [docker network inspect addons-568394]: docker network inspect addons-568394: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-568394 not found
	I1222 00:09:30.769712 1882225 network_create.go:289] output of [docker network inspect addons-568394]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-568394 not found
	
	** /stderr **
	I1222 00:09:30.769814 1882225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:09:30.787985 1882225 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d2740}
	I1222 00:09:30.788034 1882225 network_create.go:124] attempt to create docker network addons-568394 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1222 00:09:30.788089 1882225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-568394 addons-568394
	I1222 00:09:30.849632 1882225 network_create.go:108] docker network addons-568394 192.168.49.0/24 created
	I1222 00:09:30.849665 1882225 kic.go:121] calculated static IP "192.168.49.2" for the "addons-568394" container
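
	If network creation ever fails at this step, the expected state is easy to verify with the same inspect call minikube wraps (a sketch; network name as in this run):

	    docker network inspect addons-568394 \
	      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	    # expect: 192.168.49.0/24 via 192.168.49.1
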
	I1222 00:09:30.849737 1882225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 00:09:30.865028 1882225 cli_runner.go:164] Run: docker volume create addons-568394 --label name.minikube.sigs.k8s.io=addons-568394 --label created_by.minikube.sigs.k8s.io=true
	I1222 00:09:30.882233 1882225 oci.go:103] Successfully created a docker volume addons-568394
	I1222 00:09:30.882315 1882225 cli_runner.go:164] Run: docker run --rm --name addons-568394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568394 --entrypoint /usr/bin/test -v addons-568394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 00:09:31.992356 1882225 cli_runner.go:217] Completed: docker run --rm --name addons-568394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568394 --entrypoint /usr/bin/test -v addons-568394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib: (1.110001063s)
	I1222 00:09:31.992389 1882225 oci.go:107] Successfully prepared a docker volume addons-568394
	I1222 00:09:31.992442 1882225 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 00:09:31.992457 1882225 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 00:09:31.992531 1882225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-568394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 00:09:35.930569 1882225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-568394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.937986805s)
	I1222 00:09:35.930607 1882225 kic.go:203] duration metric: took 3.938146374s to extract preloaded images to volume ...
	W1222 00:09:35.930763 1882225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 00:09:35.930888 1882225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 00:09:35.993368 1882225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-568394 --name addons-568394 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568394 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-568394 --network addons-568394 --ip 192.168.49.2 --volume addons-568394:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
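
	The --publish=127.0.0.1::8443 / ::22 flags ask Docker for ephemeral host ports; the SSH provisioner below discovers the one mapped to guest port 22 (38456 in this run). The same mapping can be read back directly (a sketch):

	    docker port addons-568394 22     # e.g. 127.0.0.1:38456
	    docker port addons-568394 8443   # the tunneled Kubernetes API endpoint
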
	I1222 00:09:36.282470 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Running}}
	I1222 00:09:36.301729 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:09:36.325796 1882225 cli_runner.go:164] Run: docker exec addons-568394 stat /var/lib/dpkg/alternatives/iptables
	I1222 00:09:36.379185 1882225 oci.go:144] the created container "addons-568394" has a running status.
	I1222 00:09:36.379219 1882225 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa...
	I1222 00:09:36.800074 1882225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 00:09:36.830546 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:09:36.854478 1882225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 00:09:36.854518 1882225 kic_runner.go:114] Args: [docker exec --privileged addons-568394 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 00:09:36.896385 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:09:36.913840 1882225 machine.go:94] provisionDockerMachine start ...
	I1222 00:09:36.913952 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:36.931093 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:36.931445 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:36.931461 1882225 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:09:36.932094 1882225 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 00:09:40.081246 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-568394
	
	I1222 00:09:40.081272 1882225 ubuntu.go:182] provisioning hostname "addons-568394"
	I1222 00:09:40.081374 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:40.105602 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:40.105938 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:40.105955 1882225 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-568394 && echo "addons-568394" | sudo tee /etc/hostname
	I1222 00:09:40.251205 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-568394
	
	I1222 00:09:40.251280 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:40.272553 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:40.272889 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:40.272911 1882225 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-568394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-568394/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-568394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:09:40.405520 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:09:40.405555 1882225 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:09:40.405580 1882225 ubuntu.go:190] setting up certificates
	I1222 00:09:40.405596 1882225 provision.go:84] configureAuth start
	I1222 00:09:40.405660 1882225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568394
	I1222 00:09:40.422116 1882225 provision.go:143] copyHostCerts
	I1222 00:09:40.422201 1882225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:09:40.422313 1882225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:09:40.422373 1882225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:09:40.422423 1882225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.addons-568394 san=[127.0.0.1 192.168.49.2 addons-568394 localhost minikube]
	I1222 00:09:40.926868 1882225 provision.go:177] copyRemoteCerts
	I1222 00:09:40.926934 1882225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:09:40.926974 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:40.943704 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:41.041006 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:09:41.060237 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1222 00:09:41.076544 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
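
	A quick way to confirm the server certificate copied above carries the SANs generated for it is to decode it on the node (a sketch, run via `minikube ssh`; assumes OpenSSL >= 1.1.1 for the -ext flag):

	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	    # expect: IP:127.0.0.1, IP:192.168.49.2, DNS:addons-568394, DNS:localhost, DNS:minikube
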
	I1222 00:09:41.092823 1882225 provision.go:87] duration metric: took 687.197497ms to configureAuth
	I1222 00:09:41.092892 1882225 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:09:41.093102 1882225 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:09:41.093203 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:41.110013 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:41.110321 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:41.110339 1882225 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:09:41.736796 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:09:41.736861 1882225 machine.go:97] duration metric: took 4.822992527s to provisionDockerMachine
	I1222 00:09:41.736900 1882225 client.go:176] duration metric: took 11.409005155s to LocalClient.Create
	I1222 00:09:41.736948 1882225 start.go:167] duration metric: took 11.409121s to libmachine.API.Create "addons-568394"
	I1222 00:09:41.736974 1882225 start.go:293] postStartSetup for "addons-568394" (driver="docker")
	I1222 00:09:41.737011 1882225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:09:41.737136 1882225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:09:41.737261 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:41.754205 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:41.849196 1882225 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:09:41.852377 1882225 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:09:41.852406 1882225 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:09:41.852418 1882225 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:09:41.852519 1882225 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:09:41.852553 1882225 start.go:296] duration metric: took 115.546851ms for postStartSetup
	I1222 00:09:41.852884 1882225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568394
	I1222 00:09:41.869222 1882225 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json ...
	I1222 00:09:41.869536 1882225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:09:41.869597 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:41.889261 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:41.986944 1882225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:09:41.991487 1882225 start.go:128] duration metric: took 11.66714782s to createHost
	I1222 00:09:41.991513 1882225 start.go:83] releasing machines lock for "addons-568394", held for 11.66728183s
	I1222 00:09:41.991587 1882225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568394
	I1222 00:09:42.014100 1882225 ssh_runner.go:195] Run: cat /version.json
	I1222 00:09:42.014165 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:42.014475 1882225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:09:42.014551 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:42.043413 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:42.053034 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:42.150697 1882225 ssh_runner.go:195] Run: systemctl --version
	I1222 00:09:42.300167 1882225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:09:42.353379 1882225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:09:42.358472 1882225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:09:42.358617 1882225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:09:42.392361 1882225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 00:09:42.392438 1882225 start.go:496] detecting cgroup driver to use...
	I1222 00:09:42.392486 1882225 detect.go:187] detected "cgroupfs" cgroup driver on host os
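
	The "cgroupfs" decision mirrors the host's cgroup mode; which mode a host is actually in can be checked with a single stat call (a sketch, assumes GNU coreutils):

	    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" = unified v2; "tmpfs" = legacy v1 hierarchy
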
	I1222 00:09:42.392587 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:09:42.411099 1882225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:09:42.423896 1882225 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:09:42.423958 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:09:42.441664 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:09:42.460000 1882225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:09:42.581955 1882225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:09:42.706497 1882225 docker.go:234] disabling docker service ...
	I1222 00:09:42.706608 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:09:42.732037 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:09:42.745192 1882225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:09:42.876402 1882225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:09:43.008758 1882225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:09:43.022733 1882225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:09:43.036946 1882225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:09:43.037035 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.045835 1882225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:09:43.045902 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.054509 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.062786 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.071638 1882225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:09:43.079736 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.088443 1882225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.101669 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
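
	The sed chain above leaves /etc/crio/crio.conf.d/02-crio.conf with a pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl; the result can be eyeballed in one grep (a sketch, run inside the node):

	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expect: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs",
	    #         conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0",
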
	I1222 00:09:43.110304 1882225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:09:43.118734 1882225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:09:43.125892 1882225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:09:43.245807 1882225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:09:43.417302 1882225 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:09:43.417476 1882225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:09:43.421286 1882225 start.go:564] Will wait 60s for crictl version
	I1222 00:09:43.421390 1882225 ssh_runner.go:195] Run: which crictl
	I1222 00:09:43.424940 1882225 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:09:43.456739 1882225 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:09:43.456910 1882225 ssh_runner.go:195] Run: crio --version
	I1222 00:09:43.487458 1882225 ssh_runner.go:195] Run: crio --version
	I1222 00:09:43.524870 1882225 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 00:09:43.527743 1882225 cli_runner.go:164] Run: docker network inspect addons-568394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:09:43.543405 1882225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:09:43.547421 1882225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 00:09:43.557008 1882225 kubeadm.go:884] updating cluster {Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:09:43.557126 1882225 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 00:09:43.557185 1882225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:09:43.601035 1882225 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:09:43.601058 1882225 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:09:43.601116 1882225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:09:43.626853 1882225 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:09:43.626879 1882225 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:09:43.626888 1882225 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1222 00:09:43.626974 1882225 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-568394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
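
	This unit fragment is installed as a systemd drop-in (10-kubeadm.conf, scp'd a few lines below), so the effective kubelet invocation can be confirmed with systemctl (a sketch, run inside the node):

	    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart --no-pager
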
	I1222 00:09:43.627057 1882225 ssh_runner.go:195] Run: crio config
	I1222 00:09:43.698400 1882225 cni.go:84] Creating CNI manager for ""
	I1222 00:09:43.698423 1882225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:09:43.698444 1882225 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:09:43.698486 1882225 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-568394 NodeName:addons-568394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:09:43.698645 1882225 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-568394"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
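
	The rendered manifest above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below); assuming the `kubeadm config validate` subcommand is available in this kubeadm release (v1.26+), it can be sanity-checked offline before init (a sketch, run inside the node):

	    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
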
	
	I1222 00:09:43.698719 1882225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 00:09:43.707424 1882225 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:09:43.707497 1882225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:09:43.714927 1882225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1222 00:09:43.727546 1882225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 00:09:43.739976 1882225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1222 00:09:43.752466 1882225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:09:43.756081 1882225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 00:09:43.765242 1882225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:09:43.879267 1882225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:09:43.895146 1882225 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394 for IP: 192.168.49.2
	I1222 00:09:43.895169 1882225 certs.go:195] generating shared ca certs ...
	I1222 00:09:43.895185 1882225 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:43.895366 1882225 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:09:44.114888 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt ...
	I1222 00:09:44.114922 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt: {Name:mk199eb237eff97383e8528ff6c07c45d31d4b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.115135 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key ...
	I1222 00:09:44.115150 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key: {Name:mka96d6e3c8770000fbb53f13617293bc772777a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.115241 1882225 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:09:44.286676 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt ...
	I1222 00:09:44.286705 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt: {Name:mk53b4c41f4966d6848c53a3628bf41792ce1991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.286879 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key ...
	I1222 00:09:44.286891 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key: {Name:mk45a334a5e477374ae3052171b7454a5db10fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.286972 1882225 certs.go:257] generating profile certs ...
	I1222 00:09:44.287035 1882225 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.key
	I1222 00:09:44.287051 1882225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt with IP's: []
	I1222 00:09:44.355565 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt ...
	I1222 00:09:44.355591 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: {Name:mk5bf503a5f32042024c92f01a32d3a4f3d760d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.355766 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.key ...
	I1222 00:09:44.355780 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.key: {Name:mk4d333b1c7588ab00a81adc489e41d7bf26c44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.355863 1882225 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1
	I1222 00:09:44.355884 1882225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1222 00:09:44.470753 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1 ...
	I1222 00:09:44.470786 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1: {Name:mk4138bafddab23f0b2abd1677fe834d17f22e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.470963 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1 ...
	I1222 00:09:44.470978 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1: {Name:mkd67fe0a0c2234cb58b7487a99841414560f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.471064 1882225 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt
	I1222 00:09:44.471149 1882225 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key
	I1222 00:09:44.471205 1882225 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key
	I1222 00:09:44.471222 1882225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt with IP's: []
	I1222 00:09:45.082545 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt ...
	I1222 00:09:45.082582 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt: {Name:mka5c990ec669d60475595f0992e7645039ee68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:45.082788 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key ...
	I1222 00:09:45.082801 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key: {Name:mk23720dbc5119a1bfc55a4799448e475b8f2236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:45.083008 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:09:45.083067 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:09:45.083105 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:09:45.083146 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:09:45.083793 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:09:45.110937 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:09:45.137081 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:09:45.161188 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:09:45.194761 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 00:09:45.229896 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 00:09:45.259858 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:09:45.285203 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:09:45.309887 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:09:45.329733 1882225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:09:45.348031 1882225 ssh_runner.go:195] Run: openssl version
	I1222 00:09:45.364110 1882225 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.374811 1882225 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:09:45.387263 1882225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.392895 1882225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.393011 1882225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.445532 1882225 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:09:45.455565 1882225 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
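The b5213941.0 link follows the OpenSSL hash-directory convention: when verifying a chain, OpenSSL looks in /etc/ssl/certs for a file named <subject-hash>.0, and the `openssl x509 -hash -noout` call above computed that hash for minikubeCA.pem. The same three steps by hand (a sketch mirroring this run):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"             # the <hash>.0 lookup target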
	I1222 00:09:45.463677 1882225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:09:45.467165 1882225 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 00:09:45.467216 1882225 kubeadm.go:401] StartCluster: {Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:09:45.467291 1882225 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:09:45.467351 1882225 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:09:45.494656 1882225 cri.go:96] found id: ""
	I1222 00:09:45.494768 1882225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:09:45.502581 1882225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:09:45.510441 1882225 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:09:45.510504 1882225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:09:45.518351 1882225 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:09:45.518372 1882225 kubeadm.go:158] found existing configuration files:
	
	I1222 00:09:45.518421 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 00:09:45.526123 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:09:45.526216 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:09:45.534143 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 00:09:45.541796 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:09:45.541888 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:09:45.549127 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 00:09:45.556595 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:09:45.556687 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:09:45.564182 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 00:09:45.571909 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:09:45.571998 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
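The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; anything else (including, as here, files that simply don't exist yet) is removed so kubeadm writes it fresh. Condensed into a loop (a sketch equivalent in effect to the logged commands):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # absent or pointing elsewhere: delete and regenerate
    done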
	I1222 00:09:45.579565 1882225 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
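The init invocation pins PATH to the cached v1.34.3 binaries and whitelists exactly the preflight checks expected to trip inside a docker-driver container (occupied directories, port 10250, swap, CPU/memory, SystemVerification, bridge-nf-call-iptables). Its trimmed-down shape (a sketch; the full ignore list is in the line above):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem,SystemVerification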
	I1222 00:09:45.622935 1882225 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 00:09:45.622993 1882225 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:09:45.645728 1882225 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:09:45.645811 1882225 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:09:45.645854 1882225 kubeadm.go:319] OS: Linux
	I1222 00:09:45.645908 1882225 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:09:45.645964 1882225 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:09:45.646027 1882225 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:09:45.646086 1882225 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:09:45.646138 1882225 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:09:45.646189 1882225 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:09:45.646247 1882225 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:09:45.646304 1882225 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:09:45.646357 1882225 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:09:45.710195 1882225 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:09:45.710316 1882225 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:09:45.710412 1882225 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:09:45.718622 1882225 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:09:45.724282 1882225 out.go:252]   - Generating certificates and keys ...
	I1222 00:09:45.724448 1882225 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:09:45.724563 1882225 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:09:46.082520 1882225 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 00:09:46.625536 1882225 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 00:09:46.890394 1882225 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 00:09:47.380002 1882225 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 00:09:48.124909 1882225 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 00:09:48.125288 1882225 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-568394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:09:49.912284 1882225 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 00:09:49.912639 1882225 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-568394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:09:50.888500 1882225 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 00:09:51.197769 1882225 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 00:09:51.616166 1882225 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 00:09:51.616453 1882225 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:09:51.943773 1882225 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:09:52.272456 1882225 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:09:52.867850 1882225 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:09:53.245460 1882225 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:09:53.402093 1882225 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:09:53.402712 1882225 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:09:53.407209 1882225 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:09:53.411054 1882225 out.go:252]   - Booting up control plane ...
	I1222 00:09:53.411157 1882225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:09:53.411233 1882225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:09:53.412054 1882225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:09:53.433885 1882225 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:09:53.434315 1882225 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:09:53.441727 1882225 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:09:53.442053 1882225 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:09:53.442100 1882225 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:09:53.565266 1882225 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:09:53.565410 1882225 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:09:55.066153 1882225 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500946045s
	I1222 00:09:55.069612 1882225 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 00:09:55.069730 1882225 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1222 00:09:55.069825 1882225 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 00:09:55.069913 1882225 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 00:09:58.870530 1882225 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.80047348s
	I1222 00:10:00.829675 1882225 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.76013097s
	I1222 00:10:01.571429 1882225 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501579933s
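kubeadm's control-plane-check polls the three endpoints named above until each reports healthy; the scheduler and controller-manager are probed on localhost, the apiserver on the node IP. The same probes can be replayed by hand (a sketch; -k because these serve self-signed certificates):

    curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez     # kube-scheduler
    curl -k https://192.168.49.2:8443/livez   # kube-apiserver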
	I1222 00:10:01.608579 1882225 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 00:10:01.625234 1882225 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 00:10:01.639815 1882225 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 00:10:01.640026 1882225 kubeadm.go:319] [mark-control-plane] Marking the node addons-568394 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 00:10:01.656494 1882225 kubeadm.go:319] [bootstrap-token] Using token: tibsgw.42l7uf9d6um2buj4
	I1222 00:10:01.659561 1882225 out.go:252]   - Configuring RBAC rules ...
	I1222 00:10:01.659700 1882225 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 00:10:01.664558 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 00:10:01.674909 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 00:10:01.679760 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 00:10:01.684432 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 00:10:01.691259 1882225 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 00:10:01.981312 1882225 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 00:10:02.415271 1882225 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 00:10:02.978770 1882225 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 00:10:02.979995 1882225 kubeadm.go:319] 
	I1222 00:10:02.980069 1882225 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 00:10:02.980081 1882225 kubeadm.go:319] 
	I1222 00:10:02.980154 1882225 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 00:10:02.980161 1882225 kubeadm.go:319] 
	I1222 00:10:02.980185 1882225 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 00:10:02.980251 1882225 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 00:10:02.980302 1882225 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 00:10:02.980309 1882225 kubeadm.go:319] 
	I1222 00:10:02.980360 1882225 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 00:10:02.980368 1882225 kubeadm.go:319] 
	I1222 00:10:02.980412 1882225 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 00:10:02.980421 1882225 kubeadm.go:319] 
	I1222 00:10:02.980470 1882225 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 00:10:02.980543 1882225 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 00:10:02.980614 1882225 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 00:10:02.980623 1882225 kubeadm.go:319] 
	I1222 00:10:02.980702 1882225 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 00:10:02.980777 1882225 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 00:10:02.980785 1882225 kubeadm.go:319] 
	I1222 00:10:02.980863 1882225 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tibsgw.42l7uf9d6um2buj4 \
	I1222 00:10:02.980964 1882225 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 00:10:02.980987 1882225 kubeadm.go:319] 	--control-plane 
	I1222 00:10:02.980992 1882225 kubeadm.go:319] 
	I1222 00:10:02.981071 1882225 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 00:10:02.981078 1882225 kubeadm.go:319] 
	I1222 00:10:02.981160 1882225 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tibsgw.42l7uf9d6um2buj4 \
	I1222 00:10:02.981259 1882225 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 00:10:02.985715 1882225 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 00:10:02.985931 1882225 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:10:02.986040 1882225 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
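None of the three warnings is fatal here: cgroups v1 is merely in maintenance mode on this 5.15 AWS kernel, the missing "configs" module only prevents kernel-config introspection, and the unit-file warning has the one-line fix it itself suggests:

    sudo systemctl enable kubelet.service   # make kubelet start on boot, as the warning recommends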
	I1222 00:10:02.986056 1882225 cni.go:84] Creating CNI manager for ""
	I1222 00:10:02.986063 1882225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:10:02.989413 1882225 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 00:10:02.992388 1882225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 00:10:02.996616 1882225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 00:10:02.996636 1882225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 00:10:03.013275 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1222 00:10:03.301132 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:03.301245 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-568394 minikube.k8s.io/updated_at=2025_12_22T00_10_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=addons-568394 minikube.k8s.io/primary=true
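Right after init, minikube stamps the node with its version, commit, primary-node, and updated_at labels. Whether they landed can be verified with plain kubectl (a sketch; note the escaped dots in the jsonpath label key):

    kubectl get node addons-568394 --show-labels
    kubectl get node addons-568394 \
      -o jsonpath='{.metadata.labels.minikube\.k8s\.io/version}'   # prints v1.37.0 for this run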
	I1222 00:10:03.301042 1882225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 00:10:03.447642 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:03.447651 1882225 ops.go:34] apiserver oom_adj: -16
	I1222 00:10:03.948556 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:04.448663 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:04.948063 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:05.448313 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:05.948598 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:06.448498 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:06.948528 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:07.448599 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:07.947761 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:08.447743 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:08.593775 1882225 kubeadm.go:1114] duration metric: took 5.292692913s to wait for elevateKubeSystemPrivileges
	I1222 00:10:08.593805 1882225 kubeadm.go:403] duration metric: took 23.126593097s to StartCluster
	I1222 00:10:08.593822 1882225 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:10:08.593943 1882225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:10:08.594376 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:10:08.594573 1882225 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:10:08.594769 1882225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 00:10:08.595029 1882225 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:10:08.595059 1882225 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
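The toEnable map is the fully resolved addon set for this profile; note volcano:true, which the warning further down rejects because the volcano addon does not support crio. The same set is driven per profile from the CLI (a sketch using the standard minikube addon commands):

    minikube addons list -p addons-568394                     # each addon and its current state
    minikube addons enable metrics-server -p addons-568394
    minikube addons disable volcano -p addons-568394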
	I1222 00:10:08.595137 1882225 addons.go:70] Setting yakd=true in profile "addons-568394"
	I1222 00:10:08.595156 1882225 addons.go:239] Setting addon yakd=true in "addons-568394"
	I1222 00:10:08.595177 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.595665 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.596087 1882225 addons.go:70] Setting inspektor-gadget=true in profile "addons-568394"
	I1222 00:10:08.596114 1882225 addons.go:239] Setting addon inspektor-gadget=true in "addons-568394"
	I1222 00:10:08.596147 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.596586 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.596802 1882225 addons.go:70] Setting metrics-server=true in profile "addons-568394"
	I1222 00:10:08.596820 1882225 addons.go:239] Setting addon metrics-server=true in "addons-568394"
	I1222 00:10:08.596846 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.597245 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.599383 1882225 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-568394"
	I1222 00:10:08.599424 1882225 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-568394"
	I1222 00:10:08.599566 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.599860 1882225 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-568394"
	I1222 00:10:08.599891 1882225 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-568394"
	I1222 00:10:08.599915 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.600340 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.600489 1882225 addons.go:70] Setting cloud-spanner=true in profile "addons-568394"
	I1222 00:10:08.600506 1882225 addons.go:239] Setting addon cloud-spanner=true in "addons-568394"
	I1222 00:10:08.600532 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.600916 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.606844 1882225 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-568394"
	I1222 00:10:08.606924 1882225 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-568394"
	I1222 00:10:08.606958 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.607462 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.607843 1882225 addons.go:70] Setting registry=true in profile "addons-568394"
	I1222 00:10:08.607867 1882225 addons.go:239] Setting addon registry=true in "addons-568394"
	I1222 00:10:08.607895 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.608315 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.621373 1882225 addons.go:70] Setting default-storageclass=true in profile "addons-568394"
	I1222 00:10:08.621407 1882225 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-568394"
	I1222 00:10:08.621749 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.629184 1882225 addons.go:70] Setting registry-creds=true in profile "addons-568394"
	I1222 00:10:08.629224 1882225 addons.go:239] Setting addon registry-creds=true in "addons-568394"
	I1222 00:10:08.629260 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.629761 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.640468 1882225 addons.go:70] Setting gcp-auth=true in profile "addons-568394"
	I1222 00:10:08.640517 1882225 mustload.go:66] Loading cluster: addons-568394
	I1222 00:10:08.640740 1882225 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:10:08.640994 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.650124 1882225 addons.go:70] Setting storage-provisioner=true in profile "addons-568394"
	I1222 00:10:08.650170 1882225 addons.go:239] Setting addon storage-provisioner=true in "addons-568394"
	I1222 00:10:08.650205 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.650673 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.653397 1882225 addons.go:70] Setting ingress=true in profile "addons-568394"
	I1222 00:10:08.653429 1882225 addons.go:239] Setting addon ingress=true in "addons-568394"
	I1222 00:10:08.653468 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.653944 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.675223 1882225 addons.go:70] Setting ingress-dns=true in profile "addons-568394"
	I1222 00:10:08.675256 1882225 addons.go:239] Setting addon ingress-dns=true in "addons-568394"
	I1222 00:10:08.675304 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.675774 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.676484 1882225 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-568394"
	I1222 00:10:08.676517 1882225 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-568394"
	I1222 00:10:08.676824 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.723206 1882225 out.go:179] * Verifying Kubernetes components...
	I1222 00:10:08.724345 1882225 addons.go:70] Setting volcano=true in profile "addons-568394"
	I1222 00:10:08.724379 1882225 addons.go:239] Setting addon volcano=true in "addons-568394"
	I1222 00:10:08.724421 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.724869 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.727443 1882225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:10:08.727920 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.756454 1882225 addons.go:70] Setting volumesnapshots=true in profile "addons-568394"
	I1222 00:10:08.756529 1882225 addons.go:239] Setting addon volumesnapshots=true in "addons-568394"
	I1222 00:10:08.756580 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.757097 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.801751 1882225 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1222 00:10:08.805460 1882225 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1222 00:10:08.805485 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1222 00:10:08.805552 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
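The Go template in that inspect call extracts the host port docker mapped to the container's 22/tcp; this is how minikube discovers its SSH port (38456, visible in the ssh client lines below). Reformatted for readability (a sketch):

    docker container inspect addons-568394 \
      -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}'
    # .Ports maps "22/tcp" to a list of bindings; index twice, then read .HostPort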
	I1222 00:10:08.860641 1882225 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1222 00:10:08.867986 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1222 00:10:08.868019 1882225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1222 00:10:08.868121 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.888892 1882225 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1222 00:10:08.894664 1882225 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1222 00:10:08.895425 1882225 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1222 00:10:08.895467 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1222 00:10:08.895566 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.916103 1882225 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1222 00:10:08.925187 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1222 00:10:08.925255 1882225 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1222 00:10:08.925369 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.932280 1882225 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1222 00:10:08.932753 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1222 00:10:08.949744 1882225 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1222 00:10:08.957997 1882225 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1222 00:10:08.958064 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1222 00:10:08.958166 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.962351 1882225 addons.go:239] Setting addon default-storageclass=true in "addons-568394"
	I1222 00:10:08.967505 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.968070 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.967264 1882225 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1222 00:10:09.004786 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1222 00:10:09.004869 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.005308 1882225 out.go:179]   - Using image docker.io/registry:3.0.0
	I1222 00:10:09.006568 1882225 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-568394"
	I1222 00:10:09.006677 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:09.007405 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:09.039750 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1222 00:10:09.043303 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W1222 00:10:08.967348 1882225 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1222 00:10:09.047463 1882225 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:10:09.047626 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1222 00:10:09.051150 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1222 00:10:09.048349 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1222 00:10:09.051354 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1222 00:10:09.051434 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.048431 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.059465 1882225 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:10:09.059481 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:10:09.059537 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.064588 1882225 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1222 00:10:09.065555 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1222 00:10:09.065702 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1222 00:10:09.065883 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:09.111665 1882225 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1222 00:10:09.111686 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1222 00:10:09.111753 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.158910 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1222 00:10:09.158930 1882225 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1222 00:10:09.158996 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.162679 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.163497 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.168544 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.169241 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1222 00:10:09.169376 1882225 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1222 00:10:09.170063 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1222 00:10:09.173888 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1222 00:10:09.174077 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1222 00:10:09.174089 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1222 00:10:09.174156 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.185233 1882225 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:10:09.185257 1882225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:10:09.185315 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.186333 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1222 00:10:09.188685 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1222 00:10:09.189262 1882225 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1222 00:10:09.189284 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1222 00:10:09.189413 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.197639 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1222 00:10:09.197661 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1222 00:10:09.197725 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.205375 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.224274 1882225 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1222 00:10:09.228756 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.236435 1882225 out.go:179]   - Using image docker.io/busybox:stable
	I1222 00:10:09.241441 1882225 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1222 00:10:09.241475 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1222 00:10:09.241542 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.253906 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.278767 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.339738 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.340241 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.352483 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.352833 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.353978 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	W1222 00:10:09.355305 1882225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1222 00:10:09.355340 1882225 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I1222 00:10:09.362376 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.371700 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.390720 1882225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:10:09.390795 1882225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
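The sed pipeline above pulls the live coredns ConfigMap, splices a hosts block in front of the forward directive (plus a log line above errors), and replaces the ConfigMap so host.minikube.internal resolves to the gateway (192.168.49.1) from inside pods. To see the result (a sketch):

    kubectl -n kube-system get configmap coredns -o yaml
    # the injected fragment, per the sed expressions:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }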
	I1222 00:10:09.980058 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1222 00:10:09.980125 1882225 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1222 00:10:09.988342 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1222 00:10:09.994287 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:10:10.013910 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1222 00:10:10.095561 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1222 00:10:10.146056 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1222 00:10:10.218966 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1222 00:10:10.218986 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1222 00:10:10.256835 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1222 00:10:10.275254 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1222 00:10:10.275336 1882225 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1222 00:10:10.283459 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1222 00:10:10.283535 1882225 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1222 00:10:10.283951 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:10:10.348458 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1222 00:10:10.379977 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1222 00:10:10.449311 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1222 00:10:10.575159 1882225 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1222 00:10:10.575188 1882225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1222 00:10:10.632395 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1222 00:10:10.632428 1882225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1222 00:10:10.654475 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1222 00:10:10.654501 1882225 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1222 00:10:10.661423 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1222 00:10:10.661451 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1222 00:10:10.807425 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1222 00:10:10.807491 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1222 00:10:10.863325 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1222 00:10:10.917629 1882225 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1222 00:10:10.917700 1882225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1222 00:10:10.938740 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1222 00:10:10.938814 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1222 00:10:11.078709 1882225 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1222 00:10:11.078789 1882225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1222 00:10:11.097420 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1222 00:10:11.097486 1882225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1222 00:10:11.192972 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1222 00:10:11.193046 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1222 00:10:11.235255 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1222 00:10:11.350367 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1222 00:10:11.397073 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1222 00:10:11.397147 1882225 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1222 00:10:11.460012 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1222 00:10:11.460087 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1222 00:10:11.608360 1882225 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1222 00:10:11.608437 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1222 00:10:11.672875 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1222 00:10:11.957648 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1222 00:10:11.957720 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1222 00:10:12.208416 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1222 00:10:12.208487 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1222 00:10:12.485584 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1222 00:10:12.485655 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1222 00:10:12.536380 1882225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.145557426s)
	I1222 00:10:12.536449 1882225 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
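	The three-second pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward directive, so in-cluster DNS resolves host.minikube.internal to the host gateway. A minimal sketch of the resulting Corefile fragment (illustrative; the live ConfigMap carries the full plugin chain):

		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf

	The injected record can be checked with: kubectl -n kube-system get configmap coredns -o yaml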
	I1222 00:10:12.537435 1882225 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.146681094s)
	I1222 00:10:12.537511 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.549096575s)
	I1222 00:10:12.538245 1882225 node_ready.go:35] waiting up to 6m0s for node "addons-568394" to be "Ready" ...
	I1222 00:10:12.787769 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1222 00:10:12.787806 1882225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1222 00:10:13.046566 1882225 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-568394" context rescaled to 1 replicas
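	On a single-node cluster minikube trims the coredns deployment to one replica; the equivalent manual step would be (illustrative):

		kubectl -n kube-system scale deployment coredns --replicas=1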
	I1222 00:10:13.166647 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1222 00:10:13.166675 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1222 00:10:13.304902 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1222 00:10:13.304926 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1222 00:10:13.486858 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1222 00:10:13.486928 1882225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1222 00:10:13.657967 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1222 00:10:14.211106 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.216743986s)
	W1222 00:10:14.541876 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:14.672150 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.658051087s)
	I1222 00:10:14.672274 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.576691997s)
	I1222 00:10:14.672356 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.526281137s)
	I1222 00:10:14.672427 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.415530335s)
	I1222 00:10:14.672676 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.388682195s)
	I1222 00:10:14.672826 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.324287516s)
	I1222 00:10:14.672880 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.292842963s)
	W1222 00:10:14.752410 1882225 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
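	The warning above is an ordinary optimistic-concurrency conflict: another writer updated the StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. Re-applying against the latest object succeeds; marking local-path default by hand would look like (illustrative):

		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'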
	I1222 00:10:15.872353 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.422938082s)
	I1222 00:10:15.872429 1882225 addons.go:495] Verifying addon ingress=true in "addons-568394"
	I1222 00:10:15.872610 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.009215824s)
	I1222 00:10:15.872644 1882225 addons.go:495] Verifying addon registry=true in "addons-568394"
	I1222 00:10:15.872921 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.637586209s)
	I1222 00:10:15.873187 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.522731687s)
	I1222 00:10:15.873278 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.200329425s)
	I1222 00:10:15.873597 1882225 addons.go:495] Verifying addon metrics-server=true in "addons-568394"
	W1222 00:10:15.873630 1882225 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1222 00:10:15.873649 1882225 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
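	Both attempts hit the usual CRD ordering race: a single kubectl apply submits the VolumeSnapshotClass CRD and a VolumeSnapshotClass object together, and the API server has not registered the new kind by the time the custom resource is validated, hence "no matches for kind" and the hint to install CRDs first. Splitting the apply into two passes avoids the race (a sketch using the same manifests):

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

	minikube instead retries the combined apply (with --force, below), which likewise converges once the CRDs are established.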
	I1222 00:10:15.875376 1882225 out.go:179] * Verifying ingress addon...
	I1222 00:10:15.877559 1882225 out.go:179] * Verifying registry addon...
	I1222 00:10:15.877559 1882225 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-568394 service yakd-dashboard -n yakd-dashboard
	
	I1222 00:10:15.881380 1882225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1222 00:10:15.883192 1882225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1222 00:10:15.888407 1882225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1222 00:10:15.888431 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:15.892835 1882225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1222 00:10:15.892858 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
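	From here kapi polls each addon's pods by label selector until they leave Pending. The same checks by hand (illustrative):

		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry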
	I1222 00:10:16.123304 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1222 00:10:16.126225 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.468158219s)
	I1222 00:10:16.126298 1882225 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-568394"
	I1222 00:10:16.129234 1882225 out.go:179] * Verifying csi-hostpath-driver addon...
	I1222 00:10:16.133084 1882225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1222 00:10:16.138692 1882225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1222 00:10:16.138757 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:16.387135 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:16.387621 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:16.636336 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:16.676726 1882225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1222 00:10:16.676835 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:16.693863 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:16.802437 1882225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1222 00:10:16.816411 1882225 addons.go:239] Setting addon gcp-auth=true in "addons-568394"
	I1222 00:10:16.816458 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:16.816905 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:16.833752 1882225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1222 00:10:16.833811 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:16.851263 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
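	The SSH endpoint is recovered from the container's published port map: the Go template indexes .NetworkSettings.Ports at "22/tcp" and takes the first binding's HostPort (38456 here). Standalone (illustrative):

		docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-568394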
	I1222 00:10:16.888182 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:16.888612 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1222 00:10:17.041234 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:17.136278 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:17.385353 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:17.386379 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:17.635819 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:17.885539 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:17.886939 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:18.137024 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:18.386145 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:18.386522 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:18.638619 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:18.815340 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.691947205s)
	I1222 00:10:18.815403 1882225 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.981628895s)
	I1222 00:10:18.818737 1882225 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1222 00:10:18.821575 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1222 00:10:18.824434 1882225 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1222 00:10:18.824457 1882225 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1222 00:10:18.838569 1882225 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1222 00:10:18.838593 1882225 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1222 00:10:18.854090 1882225 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1222 00:10:18.854116 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1222 00:10:18.868837 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1222 00:10:18.887049 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:18.887959 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1222 00:10:19.042398 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:19.136454 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:19.361092 1882225 addons.go:495] Verifying addon gcp-auth=true in "addons-568394"
	I1222 00:10:19.364189 1882225 out.go:179] * Verifying gcp-auth addon...
	I1222 00:10:19.367839 1882225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1222 00:10:19.379305 1882225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1222 00:10:19.379378 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:19.478764 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:19.478767 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:19.636106 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:19.871102 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:19.884991 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:19.887146 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:20.136915 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:20.371042 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:20.384836 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:20.388200 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:20.637159 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:20.871195 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:20.885430 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:20.886535 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:21.136836 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:21.370639 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:21.384685 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:21.386550 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1222 00:10:21.541480 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:21.636840 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:21.871089 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:21.884727 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:21.886163 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:22.136936 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:22.370899 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:22.384951 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:22.386974 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:22.637749 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:22.871096 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:22.884927 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:22.886039 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:23.137411 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:23.371350 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:23.385381 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:23.386026 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:23.575657 1882225 node_ready.go:49] node "addons-568394" is "Ready"
	I1222 00:10:23.575694 1882225 node_ready.go:38] duration metric: took 11.037398399s for node "addons-568394" to be "Ready" ...
	I1222 00:10:23.575730 1882225 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:10:23.575815 1882225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:10:23.598925 1882225 api_server.go:72] duration metric: took 15.004321773s to wait for apiserver process to appear ...
	I1222 00:10:23.598953 1882225 api_server.go:88] waiting for apiserver healthz status ...
	I1222 00:10:23.598992 1882225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1222 00:10:23.615350 1882225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
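	The health probe is a plain GET against the apiserver's /healthz endpoint, which returns 200 with body "ok" once the control plane is serving. The same check by hand (illustrative):

		kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz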
	I1222 00:10:23.628855 1882225 api_server.go:141] control plane version: v1.34.3
	I1222 00:10:23.628883 1882225 api_server.go:131] duration metric: took 29.923053ms to wait for apiserver health ...
	I1222 00:10:23.628892 1882225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 00:10:23.673164 1882225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1222 00:10:23.673191 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:23.674116 1882225 system_pods.go:59] 19 kube-system pods found
	I1222 00:10:23.674179 1882225 system_pods.go:61] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending
	I1222 00:10:23.674195 1882225 system_pods.go:61] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:23.674201 1882225 system_pods.go:61] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:23.674212 1882225 system_pods.go:61] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending
	I1222 00:10:23.674217 1882225 system_pods.go:61] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:23.674221 1882225 system_pods.go:61] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:23.674225 1882225 system_pods.go:61] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:23.674229 1882225 system_pods.go:61] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:23.674233 1882225 system_pods.go:61] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending
	I1222 00:10:23.674267 1882225 system_pods.go:61] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:23.674279 1882225 system_pods.go:61] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:23.674285 1882225 system_pods.go:61] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending
	I1222 00:10:23.674294 1882225 system_pods.go:61] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending
	I1222 00:10:23.674299 1882225 system_pods.go:61] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending
	I1222 00:10:23.674302 1882225 system_pods.go:61] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending
	I1222 00:10:23.674306 1882225 system_pods.go:61] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending
	I1222 00:10:23.674318 1882225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending
	I1222 00:10:23.674326 1882225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending
	I1222 00:10:23.674330 1882225 system_pods.go:61] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending
	I1222 00:10:23.674347 1882225 system_pods.go:74] duration metric: took 45.415374ms to wait for pod list to return data ...
	I1222 00:10:23.674360 1882225 default_sa.go:34] waiting for default service account to be created ...
	I1222 00:10:23.689706 1882225 default_sa.go:45] found service account: "default"
	I1222 00:10:23.689734 1882225 default_sa.go:55] duration metric: took 15.367787ms for default service account to be created ...
	I1222 00:10:23.689743 1882225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 00:10:23.732995 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:23.733031 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending
	I1222 00:10:23.733039 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:23.733043 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:23.733070 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending
	I1222 00:10:23.733078 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:23.733082 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:23.733087 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:23.733091 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:23.733099 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending
	I1222 00:10:23.733110 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:23.733119 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:23.733123 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending
	I1222 00:10:23.733127 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending
	I1222 00:10:23.733145 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending
	I1222 00:10:23.733159 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:23.733177 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending
	I1222 00:10:23.733189 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending
	I1222 00:10:23.733193 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending
	I1222 00:10:23.733198 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending
	I1222 00:10:23.733216 1882225 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1222 00:10:23.925791 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:23.926010 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:23.967190 1882225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1222 00:10:23.967215 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:23.984389 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:23.984421 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending
	I1222 00:10:23.984428 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:23.984432 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:23.984474 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending
	I1222 00:10:23.984485 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:23.984491 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:23.984495 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:23.984500 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:23.984510 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending
	I1222 00:10:23.984514 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:23.984520 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:23.984541 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending
	I1222 00:10:23.984560 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending
	I1222 00:10:23.984573 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending
	I1222 00:10:23.984587 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:23.984601 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending
	I1222 00:10:23.984608 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending
	I1222 00:10:23.984620 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:23.984637 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 00:10:24.139948 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:24.262984 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:24.263078 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 00:10:24.263100 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:24.263119 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:24.263154 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1222 00:10:24.263174 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:24.263193 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:24.263213 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:24.263248 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:24.263269 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1222 00:10:24.263288 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:24.263308 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:24.263342 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1222 00:10:24.263362 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1222 00:10:24.263382 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1222 00:10:24.263412 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:24.263436 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1222 00:10:24.263456 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.263476 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.263507 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 00:10:24.375688 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:24.394992 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:24.400361 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:24.655523 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:24.741191 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:24.741281 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Running
	I1222 00:10:24.741308 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1222 00:10:24.741343 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1222 00:10:24.741370 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1222 00:10:24.741394 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:24.741422 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:24.741442 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:24.741460 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:24.741499 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1222 00:10:24.741518 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:24.741538 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:24.741572 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1222 00:10:24.741594 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1222 00:10:24.741614 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1222 00:10:24.741637 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:24.741665 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1222 00:10:24.741684 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.741704 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.741738 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Running
	I1222 00:10:24.741760 1882225 system_pods.go:126] duration metric: took 1.052010226s to wait for k8s-apps to be running ...
	I1222 00:10:24.741780 1882225 system_svc.go:44] waiting for kubelet service to be running ...
	I1222 00:10:24.741856 1882225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:10:24.781154 1882225 system_svc.go:56] duration metric: WaitForService took 39.364176ms to wait for kubelet
	I1222 00:10:24.781237 1882225 kubeadm.go:587] duration metric: took 16.186638773s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:10:24.781274 1882225 node_conditions.go:102] verifying NodePressure condition ...
	I1222 00:10:24.784700 1882225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 00:10:24.784795 1882225 node_conditions.go:123] node cpu capacity is 2
	I1222 00:10:24.784825 1882225 node_conditions.go:105] duration metric: took 3.503883ms to run NodePressure ...
	I1222 00:10:24.784850 1882225 start.go:242] waiting for startup goroutines ...
	I1222 00:10:24.872000 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:24.885114 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:24.887223 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:25.137994 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:25.371493 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:25.385455 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:25.387147 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:25.636993 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:25.871678 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:25.885416 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:25.887869 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:26.138290 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:26.371059 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:26.384880 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:26.387200 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:26.637766 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:26.870728 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:26.885248 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:26.887749 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:27.137311 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:27.372681 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:27.390768 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:27.391327 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:27.637247 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:27.871532 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:27.886102 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:27.888187 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:28.137089 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:28.371717 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:28.387174 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:28.389284 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:28.639388 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:28.871623 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:28.885789 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:28.888481 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:29.140231 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:29.374128 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:29.386361 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:29.390099 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:29.636969 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:29.871257 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:29.886593 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:29.887091 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:30.143233 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:30.372392 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:30.387841 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:30.388290 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:30.637595 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:30.892766 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:30.894007 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:30.894749 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:31.138099 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:31.371732 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:31.385518 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:31.387845 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:31.637478 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:31.871194 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:31.885921 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:31.887142 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:32.136232 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:32.371158 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:32.385956 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:32.387161 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:32.636841 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:32.871225 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:32.887027 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:32.887478 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:33.137012 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:33.371471 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:33.385860 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:33.387450 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:33.636852 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:33.870774 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:33.884746 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:33.886839 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:34.138124 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:34.370623 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:34.385761 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:34.387397 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:34.637526 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:34.871065 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:34.884974 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:34.886952 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:35.136644 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:35.371370 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:35.385981 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:35.387707 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:35.637416 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:35.871317 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:35.884465 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:35.886881 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:36.136240 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:36.371300 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:36.387817 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:36.387868 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:36.656973 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:36.871150 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:36.887479 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:36.887842 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:37.138305 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:37.371939 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:37.388833 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:37.390117 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:37.636524 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:37.875754 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:37.886311 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:37.888043 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:38.136671 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:38.378165 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:38.387925 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:38.389937 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:38.636607 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:38.870581 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:38.884745 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:38.887604 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:39.136535 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:39.370814 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:39.384866 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:39.386762 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:39.637879 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:39.871435 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:39.885449 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:39.886949 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:40.137182 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:40.371194 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:40.386454 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:40.386806 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:40.638328 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:40.871652 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:40.972688 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:40.973301 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:41.136453 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:41.371611 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:41.385068 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:41.387588 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:41.637261 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:41.871537 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:41.884190 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:41.886862 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:42.142900 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:42.372220 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:42.388517 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:42.394260 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:42.636806 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:42.870530 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:42.884776 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:42.887074 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:43.137114 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:43.371295 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:43.386891 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:43.388231 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:43.636586 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:43.871780 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:43.887685 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:43.888179 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:44.136248 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:44.370774 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:44.384675 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:44.387124 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:44.636430 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:44.875112 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:44.899630 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:44.912328 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:45.139207 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:45.374005 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:45.386692 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:45.388744 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:45.637709 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:45.870837 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:45.884771 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:45.886498 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:46.137629 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:46.371646 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:46.385756 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:46.387322 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:46.636701 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:46.872072 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:46.886395 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:46.887873 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:47.136515 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:47.371555 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:47.385639 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:47.386908 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:47.636857 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:47.871706 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:47.885172 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:47.887782 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:48.137179 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:48.371131 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:48.386418 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:48.386623 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:48.636859 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:48.870880 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:48.886303 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:48.887148 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:49.136705 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:49.371310 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:49.385389 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:49.386834 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:49.636835 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:49.871064 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:49.886238 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:49.887926 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:50.136625 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:50.370420 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:50.384347 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:50.386059 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:50.636831 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:50.871761 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:50.885273 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:50.974255 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:51.136884 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:51.371561 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:51.384450 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:51.386641 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:51.637199 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:51.876440 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:51.884837 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:51.892935 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:52.137405 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:52.371747 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:52.385637 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:52.387076 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:52.636878 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:52.886264 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:52.888259 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:52.891729 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:53.137746 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:53.370615 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:53.384862 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:53.387188 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:53.637061 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:53.872502 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:53.886530 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:53.886705 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:54.136960 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:54.371712 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:54.385416 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:54.387937 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:54.638888 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:54.876499 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:54.887532 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:54.887661 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:55.166716 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:55.371332 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:55.385365 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:55.385921 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:55.637842 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:55.871663 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:55.972802 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:55.973006 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:56.138100 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:56.371378 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:56.385842 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:56.387706 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:56.637739 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:56.872528 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:56.886108 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:56.886863 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:57.137744 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:57.370738 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:57.384865 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:57.387345 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:57.636502 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:57.871794 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:57.885405 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:57.886381 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:58.136598 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:58.370751 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:58.386312 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:58.386407 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:58.637141 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:58.871736 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:58.886360 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:58.890609 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:59.137730 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:59.374552 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:59.390073 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:59.390492 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:59.637857 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:59.870851 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:59.885221 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:59.886707 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:00.145499 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:00.372628 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:00.387778 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:00.388436 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:00.636659 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:00.871734 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:00.884976 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:00.887076 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:01.138300 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:01.371375 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:01.384949 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:01.388932 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:01.637581 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:01.871599 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:01.885665 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:01.887240 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:02.136494 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:02.372457 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:02.385550 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:02.388880 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:02.637801 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:02.872431 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:02.885313 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:02.887127 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:03.136576 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:03.371586 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:03.387516 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:03.390009 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:03.637303 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:03.871838 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:03.886602 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:03.887790 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:04.137872 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:04.371003 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:04.386617 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:04.387078 1882225 kapi.go:107] duration metric: took 48.503885616s to wait for kubernetes.io/minikube-addons=registry ...
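
The kapi.go lines above poll each label selector about every half second until the matched pods leave Pending, then emit a duration metric like the one just logged for the registry addon. A minimal sketch of that pattern with client-go follows; the interval, timeout, and function names are illustrative assumptions, not minikube's actual kapi.go code.

    // Illustrative sketch of the wait pattern behind the kapi.go lines above,
    // not minikube's actual code: poll pods matching a label selector until
    // none is Pending, then report the elapsed time. Interval and timeout
    // values are assumptions.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodPending {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := waitForPodsByLabel(context.Background(), kubernetes.NewForConfigOrDie(cfg),
            "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
            panic(err)
        }
    }
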
	I1222 00:11:04.641880 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:04.871899 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:04.885465 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:05.138146 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:05.371531 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:05.385024 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:05.637181 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:05.871692 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:05.884967 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:06.137765 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:06.371743 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:06.385003 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:06.638157 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:06.874493 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:06.886828 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:07.140149 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:07.372324 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:07.386269 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:07.637006 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:07.870615 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:07.884527 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:08.137753 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:08.371876 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:08.385080 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:08.636463 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:08.872469 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:08.885506 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:09.137724 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:09.371090 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:09.385293 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:09.640349 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:09.872899 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:09.885123 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:10.143126 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:10.371848 1882225 kapi.go:107] duration metric: took 51.004007988s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1222 00:11:10.374748 1882225 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-568394 cluster.
	I1222 00:11:10.377156 1882225 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1222 00:11:10.380081 1882225 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
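
For the gcp-auth advice just printed, a pod opts out of credential mounting by carrying the `gcp-auth-skip-secret` label key named in the log. A hypothetical sketch follows; the label key comes from the log itself, while the pod name, image choice, and "true" value are assumptions for illustration.

    // Hypothetical pod that opts out of credential mounting via the label key
    // named in the log above. Pod name, image, and the "true" value are
    // assumptions for illustration.
    package gcpauthskip

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createUnmountedPod(ctx context.Context, cs kubernetes.Interface) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-creds",
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "docker.io/kicbase/echo-server:1.0"},
                },
            },
        }
        _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
        return err
    }
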
	I1222 00:11:10.385276 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:10.637581 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:10.886014 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:11.137535 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:11.385838 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:11.637591 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:11.884943 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:12.137849 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:12.385277 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:12.636691 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:12.886986 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:13.137754 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:13.385280 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:13.639905 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:13.888603 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:14.143319 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:14.385218 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:14.636987 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:14.885449 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:15.137156 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:15.385131 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:15.636674 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:15.886038 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:16.137037 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:16.385585 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:16.637637 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:16.885283 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:17.136924 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:17.384919 1882225 kapi.go:107] duration metric: took 1m1.503539015s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1222 00:11:17.636933 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:18.138079 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:18.636461 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:19.137527 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:19.637908 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:20.137368 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:20.636881 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:21.137248 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:21.638444 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:22.137785 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:22.636279 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:23.137237 1882225 kapi.go:107] duration metric: took 1m7.004153269s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1222 00:11:23.140336 1882225 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, inspektor-gadget, registry-creds, ingress-dns, cloud-spanner, amd-gpu-device-plugin, default-storageclass, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1222 00:11:23.143154 1882225 addons.go:530] duration metric: took 1m14.548084557s for enable addons: enabled=[nvidia-device-plugin storage-provisioner inspektor-gadget registry-creds ingress-dns cloud-spanner amd-gpu-device-plugin default-storageclass metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1222 00:11:23.143219 1882225 start.go:247] waiting for cluster config update ...
	I1222 00:11:23.143243 1882225 start.go:256] writing updated cluster config ...
	I1222 00:11:23.143560 1882225 ssh_runner.go:195] Run: rm -f paused
	I1222 00:11:23.149433 1882225 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 00:11:23.153956 1882225 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6hqzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.161073 1882225 pod_ready.go:94] pod "coredns-66bc5c9577-6hqzq" is "Ready"
	I1222 00:11:23.161156 1882225 pod_ready.go:86] duration metric: took 7.168539ms for pod "coredns-66bc5c9577-6hqzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.163671 1882225 pod_ready.go:83] waiting for pod "etcd-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.169038 1882225 pod_ready.go:94] pod "etcd-addons-568394" is "Ready"
	I1222 00:11:23.169102 1882225 pod_ready.go:86] duration metric: took 5.409009ms for pod "etcd-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.172559 1882225 pod_ready.go:83] waiting for pod "kube-apiserver-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.179549 1882225 pod_ready.go:94] pod "kube-apiserver-addons-568394" is "Ready"
	I1222 00:11:23.179577 1882225 pod_ready.go:86] duration metric: took 6.991511ms for pod "kube-apiserver-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.183896 1882225 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.553836 1882225 pod_ready.go:94] pod "kube-controller-manager-addons-568394" is "Ready"
	I1222 00:11:23.553872 1882225 pod_ready.go:86] duration metric: took 369.945153ms for pod "kube-controller-manager-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.753835 1882225 pod_ready.go:83] waiting for pod "kube-proxy-7rf5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.153436 1882225 pod_ready.go:94] pod "kube-proxy-7rf5z" is "Ready"
	I1222 00:11:24.153463 1882225 pod_ready.go:86] duration metric: took 399.598196ms for pod "kube-proxy-7rf5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.353872 1882225 pod_ready.go:83] waiting for pod "kube-scheduler-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.752894 1882225 pod_ready.go:94] pod "kube-scheduler-addons-568394" is "Ready"
	I1222 00:11:24.752924 1882225 pod_ready.go:86] duration metric: took 399.021317ms for pod "kube-scheduler-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.752939 1882225 pod_ready.go:40] duration metric: took 1.60347193s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
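
The pod_ready.go lines above decide per-pod readiness from the pod's status conditions. A short sketch of that check, assuming the standard PodReady condition is what is being inspected (minikube's exact logic may differ):

    // Sketch of the readiness test the pod_ready.go lines perform per pod:
    // a pod counts as "Ready" when its PodReady condition is true. This is
    // the standard condition check, not necessarily minikube's exact code.
    package readiness

    import corev1 "k8s.io/api/core/v1"

    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
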
	I1222 00:11:24.812957 1882225 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 00:11:24.816611 1882225 out.go:179] * Done! kubectl is now configured to use "addons-568394" cluster and "default" namespace by default
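
The "minor skew: 1" note above compares kubectl 1.33.2 against the cluster's 1.34.3 by minor version; kubectl is supported within one minor version of the server, so the warning stays informational here. A toy sketch of such a comparison (the function name is invented; parsing and error handling are trimmed):

    // Toy version-skew check matching the log's "minor skew: 1" output.
    // Function name is invented; parsing and error handling are trimmed.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minorSkew(client, server string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return m
        }
        d := minor(server) - minor(client)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println(minorSkew("1.33.2", "1.34.3")) // prints 1, within kubectl's supported skew
    }
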
	
	
	==> CRI-O <==
	Dec 22 00:14:05 addons-568394 crio[826]: time="2025-12-22T00:14:05.869049237Z" level=info msg="Removed container e82c169b56c6eed54e5bcbba24944d1f6355a1c749cd9751a25fe59c3ae10c99: kube-system/registry-creds-764b6fb674-nbrkf/registry-creds" id=f9092c28-e1ef-4ec6-8955-c2c0c173da38 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.834199185Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9stdn/POD" id=ecc26fc7-bec9-4b45-a9e8-49424d44afca name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.83427394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.843546719Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9stdn Namespace:default ID:4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e UID:2d5940b0-c097-4553-9796-7f846a0e6297 NetNS:/var/run/netns/5acec3cf-cc3e-4868-a9e3-b44252078dd3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000126be8}] Aliases:map[]}"
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.843585996Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9stdn to CNI network \"kindnet\" (type=ptp)"
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.859118178Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9stdn Namespace:default ID:4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e UID:2d5940b0-c097-4553-9796-7f846a0e6297 NetNS:/var/run/netns/5acec3cf-cc3e-4868-a9e3-b44252078dd3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000126be8}] Aliases:map[]}"
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.859324038Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9stdn for CNI network kindnet (type=ptp)"
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.865444294Z" level=info msg="Ran pod sandbox 4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e with infra container: default/hello-world-app-5d498dc89-9stdn/POD" id=ecc26fc7-bec9-4b45-a9e8-49424d44afca name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.868693758Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d324ed08-aa91-43cd-8642-fe0638a1e70b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.868853269Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d324ed08-aa91-43cd-8642-fe0638a1e70b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.868901005Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=d324ed08-aa91-43cd-8642-fe0638a1e70b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.871836321Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c3049505-bdd4-4b3d-b71d-a50203af3c14 name=/runtime.v1.ImageService/PullImage
	Dec 22 00:14:23 addons-568394 crio[826]: time="2025-12-22T00:14:23.878038322Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.435807865Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=c3049505-bdd4-4b3d-b71d-a50203af3c14 name=/runtime.v1.ImageService/PullImage
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.436370072Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7d29bb14-d9e8-401d-8305-9562f1109f60 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.437919877Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5659a637-5707-4b4b-8341-ff2da786dfe0 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.443839211Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-9stdn/hello-world-app" id=efb81bcd-b88c-44dd-925b-5c9d93e64a8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.444147108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.462195294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.462687325Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1c4ab23287eef2f1077989ccb754511df3bd3d74e9f0771220e2cce38c400ea1/merged/etc/passwd: no such file or directory"
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.462798517Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c4ab23287eef2f1077989ccb754511df3bd3d74e9f0771220e2cce38c400ea1/merged/etc/group: no such file or directory"
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.463157596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.483857774Z" level=info msg="Created container 50132803ffeca941545239d4fd1be277b56c3833a971693d9c2ad5fbb334ac3d: default/hello-world-app-5d498dc89-9stdn/hello-world-app" id=efb81bcd-b88c-44dd-925b-5c9d93e64a8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.485110365Z" level=info msg="Starting container: 50132803ffeca941545239d4fd1be277b56c3833a971693d9c2ad5fbb334ac3d" id=e3eecf8d-f156-4705-84d3-cc2fb68bd1ee name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 00:14:24 addons-568394 crio[826]: time="2025-12-22T00:14:24.491175763Z" level=info msg="Started container" PID=7196 containerID=50132803ffeca941545239d4fd1be277b56c3833a971693d9c2ad5fbb334ac3d description=default/hello-world-app-5d498dc89-9stdn/hello-world-app id=e3eecf8d-f156-4705-84d3-cc2fb68bd1ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e
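
	The crio lines above trace the standard CRI pod-start sequence for hello-world-app: RunPodSandbox attaches the sandbox to the kindnet CNI network, ImageStatus reports a miss, PullImage fetches docker.io/kicbase/echo-server:1.0, and CreateContainer/StartContainer bring the workload up. As a minimal sketch (assuming shell access to this node, e.g. via minikube ssh -p addons-568394), the same objects can be inspected with crictl:

	  sudo crictl pods --name hello-world-app     # sandbox created by RunPodSandbox
	  sudo crictl images | grep echo-server       # image fetched by PullImage
	  sudo crictl ps -a --name hello-world-app    # container from CreateContainer/StartContainer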
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	50132803ffeca       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   4db475781d122       hello-world-app-5d498dc89-9stdn             default
	c04fb9917702a       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             19 seconds ago           Exited              registry-creds                           4                   55635f358433c       registry-creds-764b6fb674-nbrkf             kube-system
	ec376988050da       962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67                                                                             2 minutes ago            Running             nginx                                    0                   c854e631f5d2b       nginx                                       default
	84507c5d4f38e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   25298aed1928d       busybox                                     default
	ba6d2a10935cf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	c21e1cad727bb       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	4a3875b2cbd08       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	253f713cf37c0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	99e3eb4d57897       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	09131f4bbc07b       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   c0ab02560089d       ingress-nginx-controller-85d4c799dd-8ck2d   ingress-nginx
	27eddd74d2eec       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   971c90ee9782d       gcp-auth-78565c9fb4-72kzj                   gcp-auth
	f19b94452f671       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   0d3e34333840d       gadget-5ghgc                                gadget
	aebcc07cf8334       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    2                   5e6dfe9b50817       ingress-nginx-admission-patch-vgv9k         ingress-nginx
	86a55a63efb3c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   372d3e8a0451e       registry-proxy-nsh85                        kube-system
	71380ae3b449a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	fc3fdb6c93d05       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   ca826af535a34       nvidia-device-plugin-daemonset-j6xlv        kube-system
	c803faf1630b1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   5da8f766f6bec       csi-hostpath-resizer-0                      kube-system
	ad2b295ef64de       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   75a67ca4886be       ingress-nginx-admission-create-d4zxr        ingress-nginx
	0e06647d0f9bc       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   cf31b2b38c46b       local-path-provisioner-648f6765c9-csq79     local-path-storage
	1036aaa1604dc       docker.io/marcnuri/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                              3 minutes ago            Running             yakd                                     0                   b692485a18f65       yakd-dashboard-6654c87f9b-t69qw             yakd-dashboard
	234210792d937       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   3444b4de34f78       csi-hostpath-attacher-0                     kube-system
	c0562ac57cc54       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   cc05c52d1f0a6       snapshot-controller-7d9fbc56b8-bkzz5        kube-system
	efde6ed9bfd43       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               3 minutes ago            Running             cloud-spanner-emulator                   0                   5a70916445b54       cloud-spanner-emulator-85df47b6f4-f487h     default
	acf18d680a9bd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   468ea053da588       snapshot-controller-7d9fbc56b8-dh27b        kube-system
	e8ebac07e8c89       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   00973a74f6add       registry-6b586f9694-g9bl2                   kube-system
	8f1fd9842de20       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   253f71bef8e56       kube-ingress-dns-minikube                   kube-system
	85e4e89412a1d       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   63385695633dc       metrics-server-85b7d694d7-gdkf4             kube-system
	7af29b5bafd6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   5596da7043b7d       storage-provisioner                         kube-system
	41f5a4e6951c9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   9d8d64c37a8e6       coredns-66bc5c9577-6hqzq                    kube-system
	2d9448f792111       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           4 minutes ago            Running             kindnet-cni                              0                   1e424b7a993ad       kindnet-qsbwv                               kube-system
	9f23e3541811b       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                                                             4 minutes ago            Running             kube-proxy                               0                   e4c991bbca414       kube-proxy-7rf5z                            kube-system
	dd0d9f62338da       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   7101563766f20       etcd-addons-568394                          kube-system
	b917df9a6d567       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                                                             4 minutes ago            Running             kube-scheduler                           0                   af4cea1302c88       kube-scheduler-addons-568394                kube-system
	12de3313f756f       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                                                             4 minutes ago            Running             kube-controller-manager                  0                   ee6aa3d61e3e8       kube-controller-manager-addons-568394       kube-system
	6f8ad0683d747       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                                                             4 minutes ago            Running             kube-apiserver                           0                   1a37bbe02959f       kube-apiserver-addons-568394                kube-system
	
	
	==> coredns [41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3] <==
	[INFO] 10.244.0.15:60423 - 7819 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002215345s
	[INFO] 10.244.0.15:60423 - 12153 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000114048s
	[INFO] 10.244.0.15:60423 - 63023 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000135856s
	[INFO] 10.244.0.15:44607 - 58889 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158937s
	[INFO] 10.244.0.15:44607 - 59359 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160086s
	[INFO] 10.244.0.15:44963 - 51559 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111398s
	[INFO] 10.244.0.15:44963 - 51370 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000173279s
	[INFO] 10.244.0.15:57590 - 59680 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094742s
	[INFO] 10.244.0.15:57590 - 59467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161291s
	[INFO] 10.244.0.15:34870 - 14412 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001560989s
	[INFO] 10.244.0.15:34870 - 14215 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001602038s
	[INFO] 10.244.0.15:52948 - 1241 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012944s
	[INFO] 10.244.0.15:52948 - 862 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096235s
	[INFO] 10.244.0.20:35571 - 43331 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181213s
	[INFO] 10.244.0.20:47221 - 1554 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121949s
	[INFO] 10.244.0.20:46207 - 28695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135979s
	[INFO] 10.244.0.20:54819 - 7942 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084239s
	[INFO] 10.244.0.20:36404 - 64542 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139262s
	[INFO] 10.244.0.20:56954 - 53259 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090796s
	[INFO] 10.244.0.20:40496 - 22447 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001875926s
	[INFO] 10.244.0.20:45674 - 64836 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001699479s
	[INFO] 10.244.0.20:46446 - 50692 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003444799s
	[INFO] 10.244.0.20:34162 - 26165 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001584931s
	[INFO] 10.244.0.24:58693 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00027223s
	[INFO] 10.244.0.24:46497 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161481s
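
	The NXDOMAIN/NOERROR pairs above are the usual ndots:5 search-path expansion: for a short name such as registry.kube-system, the pod's resolver appends each suffix from its resolv.conf search list (the pod namespace, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain), and only the fully qualified registry.kube-system.svc.cluster.local. answers NOERROR. A quick way to replay the same walk from a pod (using this cluster's busybox pod, on the assumption its image ships nslookup):

	  kubectl exec busybox -- cat /etc/resolv.conf              # search list and ndots:5
	  kubectl exec busybox -- nslookup registry.kube-system     # triggers the suffix expansion logged above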
	
	
	==> describe nodes <==
	Name:               addons-568394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-568394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=addons-568394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T00_10_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-568394
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-568394"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 00:09:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-568394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 00:14:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 00:13:17 +0000   Mon, 22 Dec 2025 00:09:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 00:13:17 +0000   Mon, 22 Dec 2025 00:09:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 00:13:17 +0000   Mon, 22 Dec 2025 00:09:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 00:13:17 +0000   Mon, 22 Dec 2025 00:10:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-568394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                b8b7ef60-15bb-4f25-89c0-52633a96bb5d
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-85df47b6f4-f487h      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  default                     hello-world-app-5d498dc89-9stdn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-5ghgc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  gcp-auth                    gcp-auth-78565c9fb4-72kzj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-8ck2d    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m10s
	  kube-system                 coredns-66bc5c9577-6hqzq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m17s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 csi-hostpathplugin-26hr6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-addons-568394                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m25s
	  kube-system                 kindnet-qsbwv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m17s
	  kube-system                 kube-apiserver-addons-568394                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-addons-568394        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-7rf5z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-addons-568394                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 metrics-server-85b7d694d7-gdkf4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m11s
	  kube-system                 nvidia-device-plugin-daemonset-j6xlv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-6b586f9694-g9bl2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-creds-764b6fb674-nbrkf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 registry-proxy-nsh85                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-bkzz5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-dh27b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  local-path-storage          local-path-provisioner-648f6765c9-csq79      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-t69qw              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m15s                  kube-proxy       
	  Normal   Starting                 4m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m31s (x9 over 4m31s)  kubelet          Node addons-568394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m31s (x8 over 4m31s)  kubelet          Node addons-568394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m31s (x7 over 4m31s)  kubelet          Node addons-568394 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m23s                  kubelet          Node addons-568394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m23s                  kubelet          Node addons-568394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m23s                  kubelet          Node addons-568394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m18s                  node-controller  Node addons-568394 event: Registered Node addons-568394 in Controller
	  Normal   NodeReady                4m2s                   kubelet          Node addons-568394 status is now: NodeReady
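
	As an arithmetic check on the Allocated resources block: the non-zero CPU requests above (ingress-nginx-controller 100m, coredns 100m, etcd 100m, kindnet 100m, kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m, metrics-server 100m) sum to 1050m, and 1050m of the node's 2000m allocatable CPU is about 52%, matching the table; likewise 90Mi+70Mi+100Mi+50Mi+200Mi+128Mi = 638Mi of memory requests. The same view can be regenerated with (node name taken from this report):

	  kubectl describe node addons-568394 | sed -n '/Allocated resources/,/Events/p'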
	
	
	==> dmesg <==
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f] <==
	{"level":"warn","ts":"2025-12-22T00:09:58.145273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.172119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.197590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.227372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.262394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.282341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.306487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.340718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.365593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.388344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.416364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.444493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.487288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.505191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.547633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.584222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.612396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.629971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.781512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:16.406553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:16.420954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.503805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.558089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.582733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.604835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40564","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [27eddd74d2eecf79b0ba3e1b77abc25315582f513f8986e552f5f6e7ca7e0f55] <==
	2025/12/22 00:11:09 GCP Auth Webhook started!
	2025/12/22 00:11:25 Ready to marshal response ...
	2025/12/22 00:11:25 Ready to write response ...
	2025/12/22 00:11:25 Ready to marshal response ...
	2025/12/22 00:11:25 Ready to write response ...
	2025/12/22 00:11:25 Ready to marshal response ...
	2025/12/22 00:11:25 Ready to write response ...
	2025/12/22 00:11:44 Ready to marshal response ...
	2025/12/22 00:11:44 Ready to write response ...
	2025/12/22 00:11:46 Ready to marshal response ...
	2025/12/22 00:11:46 Ready to write response ...
	2025/12/22 00:12:03 Ready to marshal response ...
	2025/12/22 00:12:03 Ready to write response ...
	2025/12/22 00:12:04 Ready to marshal response ...
	2025/12/22 00:12:04 Ready to write response ...
	2025/12/22 00:12:26 Ready to marshal response ...
	2025/12/22 00:12:26 Ready to write response ...
	2025/12/22 00:12:26 Ready to marshal response ...
	2025/12/22 00:12:26 Ready to write response ...
	2025/12/22 00:12:34 Ready to marshal response ...
	2025/12/22 00:12:34 Ready to write response ...
	2025/12/22 00:14:23 Ready to marshal response ...
	2025/12/22 00:14:23 Ready to write response ...
	
	
	==> kernel <==
	 00:14:25 up 1 day,  6:56,  0 user,  load average: 0.66, 1.87, 1.73
	Linux addons-568394 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e] <==
	I1222 00:12:22.829496       1 main.go:301] handling current node
	I1222 00:12:32.825096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:12:32.825132       1 main.go:301] handling current node
	I1222 00:12:42.825571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:12:42.825606       1 main.go:301] handling current node
	I1222 00:12:52.829893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:12:52.829927       1 main.go:301] handling current node
	I1222 00:13:02.829719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:13:02.829755       1 main.go:301] handling current node
	I1222 00:13:12.825096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:13:12.825933       1 main.go:301] handling current node
	I1222 00:13:22.827967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:13:22.828007       1 main.go:301] handling current node
	I1222 00:13:32.826134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:13:32.826172       1 main.go:301] handling current node
	I1222 00:13:42.832639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:13:42.832675       1 main.go:301] handling current node
	I1222 00:13:52.827946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:13:52.827982       1 main.go:301] handling current node
	I1222 00:14:02.825077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:14:02.825112       1 main.go:301] handling current node
	I1222 00:14:12.825099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:14:12.825140       1 main.go:301] handling current node
	I1222 00:14:22.826128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:14:22.826161       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7] <==
	W1222 00:10:23.487359       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.210.109:443: connect: connection refused
	E1222 00:10:23.487461       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.210.109:443: connect: connection refused" logger="UnhandledError"
	W1222 00:10:23.488858       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.210.109:443: connect: connection refused
	E1222 00:10:23.491639       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.210.109:443: connect: connection refused" logger="UnhandledError"
	W1222 00:10:23.582684       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.210.109:443: connect: connection refused
	E1222 00:10:23.582723       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.210.109:443: connect: connection refused" logger="UnhandledError"
	E1222 00:10:30.783253       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.74.117:443: connect: connection refused" logger="UnhandledError"
	W1222 00:10:30.783429       1 handler_proxy.go:99] no RequestInfo found in the context
	E1222 00:10:30.783481       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1222 00:10:30.784100       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.74.117:443: connect: connection refused" logger="UnhandledError"
	E1222 00:10:30.789967       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.74.117:443: connect: connection refused" logger="UnhandledError"
	I1222 00:10:30.918392       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1222 00:10:37.503089       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:37.555829       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:37.582283       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:37.602456       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1222 00:11:54.228207       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1222 00:11:56.309760       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1222 00:12:04.775479       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1222 00:12:05.084109       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.46.218"}
	E1222 00:12:12.395935       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1222 00:14:23.706006       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.64.87"}
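
	"Failing open" in the gcp-auth webhook errors above means the mutating webhook is registered with failurePolicy: Ignore, so admission continued while the webhook backend was still coming up at 00:10:23. A sketch for confirming that policy on a live cluster (the configuration name is not shown in this report, hence the list step first):

	  kubectl get mutatingwebhookconfigurations
	  kubectl get mutatingwebhookconfiguration <name> -o jsonpath='{.webhooks[*].failurePolicy}'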
	
	
	==> kube-controller-manager [12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1] <==
	I1222 00:10:07.528338       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 00:10:07.528359       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 00:10:07.528368       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 00:10:07.528702       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1222 00:10:07.528768       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1222 00:10:07.528841       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-568394"
	I1222 00:10:07.528874       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1222 00:10:07.529513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 00:10:07.529822       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1222 00:10:07.529954       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 00:10:07.529973       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1222 00:10:07.530100       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 00:10:07.530365       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1222 00:10:07.530748       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 00:10:07.530809       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 00:10:07.530809       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 00:10:07.531339       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1222 00:10:07.534926       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 00:10:07.536438       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1222 00:10:27.532607       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1222 00:10:37.493317       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1222 00:10:37.493397       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1222 00:10:37.533740       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1222 00:10:37.594452       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 00:10:37.634808       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634] <==
	I1222 00:10:09.574751       1 server_linux.go:53] "Using iptables proxy"
	I1222 00:10:09.731105       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 00:10:09.831386       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 00:10:09.831416       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1222 00:10:09.831485       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 00:10:09.867023       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 00:10:09.867076       1 server_linux.go:132] "Using iptables Proxier"
	I1222 00:10:09.932740       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 00:10:09.933041       1 server.go:527] "Version info" version="v1.34.3"
	I1222 00:10:09.933064       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 00:10:09.956460       1 config.go:200] "Starting service config controller"
	I1222 00:10:09.962515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 00:10:09.962590       1 config.go:106] "Starting endpoint slice config controller"
	I1222 00:10:09.962597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 00:10:09.962609       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 00:10:09.962613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 00:10:09.963261       1 config.go:309] "Starting node config controller"
	I1222 00:10:09.963269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 00:10:09.963275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 00:10:10.063143       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 00:10:10.063180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1222 00:10:10.063415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f] <==
	I1222 00:10:00.804239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 00:10:00.806728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 00:10:00.806822       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 00:10:00.807744       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 00:10:00.807825       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1222 00:10:00.810886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1222 00:10:00.817570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1222 00:10:00.818075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1222 00:10:00.818177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1222 00:10:00.818254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1222 00:10:00.818350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 00:10:00.818417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 00:10:00.818490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1222 00:10:00.818565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 00:10:00.818637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1222 00:10:00.818705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1222 00:10:00.818772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 00:10:00.823460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1222 00:10:00.823587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1222 00:10:00.823693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1222 00:10:00.823874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1222 00:10:00.824013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 00:10:00.824115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1222 00:10:00.824208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1222 00:10:02.107353       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
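
	The burst of "Failed to watch ... forbidden" errors is the familiar scheduler startup race: its informers start before the apiserver finishes bootstrapping RBAC, and the errors stop once "Caches are synced" is logged at 00:10:02. Whether the permissions eventually landed can be spot-checked with impersonation, for example:

	  kubectl auth can-i list pods --as=system:kube-scheduler
	  kubectl auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler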
	
	
	==> kubelet <==
	Dec 22 00:13:28 addons-568394 kubelet[1282]: I1222 00:13:28.450459    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j6xlv" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:13:32 addons-568394 kubelet[1282]: I1222 00:13:32.451780    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nbrkf" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:13:32 addons-568394 kubelet[1282]: I1222 00:13:32.451858    1282 scope.go:117] "RemoveContainer" containerID="e82c169b56c6eed54e5bcbba24944d1f6355a1c749cd9751a25fe59c3ae10c99"
	Dec 22 00:13:32 addons-568394 kubelet[1282]: E1222 00:13:32.452070    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nbrkf_kube-system(e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff)\"" pod="kube-system/registry-creds-764b6fb674-nbrkf" podUID="e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff"
	Dec 22 00:13:43 addons-568394 kubelet[1282]: I1222 00:13:43.450688    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nbrkf" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:13:43 addons-568394 kubelet[1282]: I1222 00:13:43.450761    1282 scope.go:117] "RemoveContainer" containerID="e82c169b56c6eed54e5bcbba24944d1f6355a1c749cd9751a25fe59c3ae10c99"
	Dec 22 00:13:43 addons-568394 kubelet[1282]: E1222 00:13:43.450957    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nbrkf_kube-system(e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff)\"" pod="kube-system/registry-creds-764b6fb674-nbrkf" podUID="e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff"
	Dec 22 00:13:54 addons-568394 kubelet[1282]: I1222 00:13:54.449823    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nbrkf" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:13:54 addons-568394 kubelet[1282]: I1222 00:13:54.451110    1282 scope.go:117] "RemoveContainer" containerID="e82c169b56c6eed54e5bcbba24944d1f6355a1c749cd9751a25fe59c3ae10c99"
	Dec 22 00:13:54 addons-568394 kubelet[1282]: E1222 00:13:54.451508    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nbrkf_kube-system(e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff)\"" pod="kube-system/registry-creds-764b6fb674-nbrkf" podUID="e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff"
	Dec 22 00:13:55 addons-568394 kubelet[1282]: I1222 00:13:55.450068    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nsh85" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:14:02 addons-568394 kubelet[1282]: E1222 00:14:02.554816    1282 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/16ae721e07e70abfa8c8bb777eb6029ac62557b6b079c3670c7f2aca9ecff7b8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/16ae721e07e70abfa8c8bb777eb6029ac62557b6b079c3670c7f2aca9ecff7b8/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_registry-creds-764b6fb674-nbrkf_e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff/registry-creds/2.log" to get inode usage: stat /var/log/pods/kube-system_registry-creds-764b6fb674-nbrkf_e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff/registry-creds/2.log: no such file or directory
	Dec 22 00:14:05 addons-568394 kubelet[1282]: I1222 00:14:05.450600    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nbrkf" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:14:05 addons-568394 kubelet[1282]: I1222 00:14:05.451173    1282 scope.go:117] "RemoveContainer" containerID="e82c169b56c6eed54e5bcbba24944d1f6355a1c749cd9751a25fe59c3ae10c99"
	Dec 22 00:14:05 addons-568394 kubelet[1282]: I1222 00:14:05.835273    1282 scope.go:117] "RemoveContainer" containerID="e82c169b56c6eed54e5bcbba24944d1f6355a1c749cd9751a25fe59c3ae10c99"
	Dec 22 00:14:05 addons-568394 kubelet[1282]: I1222 00:14:05.835596    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nbrkf" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:14:05 addons-568394 kubelet[1282]: I1222 00:14:05.835634    1282 scope.go:117] "RemoveContainer" containerID="c04fb9917702ab74c4613058c79ce2df8f67ebe35f6bef41530cdc1d2cbef7b4"
	Dec 22 00:14:05 addons-568394 kubelet[1282]: E1222 00:14:05.835783    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nbrkf_kube-system(e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff)\"" pod="kube-system/registry-creds-764b6fb674-nbrkf" podUID="e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff"
	Dec 22 00:14:19 addons-568394 kubelet[1282]: I1222 00:14:19.450512    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nbrkf" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:14:19 addons-568394 kubelet[1282]: I1222 00:14:19.450592    1282 scope.go:117] "RemoveContainer" containerID="c04fb9917702ab74c4613058c79ce2df8f67ebe35f6bef41530cdc1d2cbef7b4"
	Dec 22 00:14:19 addons-568394 kubelet[1282]: E1222 00:14:19.450767    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nbrkf_kube-system(e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff)\"" pod="kube-system/registry-creds-764b6fb674-nbrkf" podUID="e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff"
	Dec 22 00:14:23 addons-568394 kubelet[1282]: I1222 00:14:23.626130    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l775n\" (UniqueName: \"kubernetes.io/projected/2d5940b0-c097-4553-9796-7f846a0e6297-kube-api-access-l775n\") pod \"hello-world-app-5d498dc89-9stdn\" (UID: \"2d5940b0-c097-4553-9796-7f846a0e6297\") " pod="default/hello-world-app-5d498dc89-9stdn"
	Dec 22 00:14:23 addons-568394 kubelet[1282]: I1222 00:14:23.626689    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2d5940b0-c097-4553-9796-7f846a0e6297-gcp-creds\") pod \"hello-world-app-5d498dc89-9stdn\" (UID: \"2d5940b0-c097-4553-9796-7f846a0e6297\") " pod="default/hello-world-app-5d498dc89-9stdn"
	Dec 22 00:14:23 addons-568394 kubelet[1282]: W1222 00:14:23.862113    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/crio-4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e WatchSource:0}: Error finding container 4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e: Status 404 returned error can't find the container with id 4db475781d12216c396dd6e9857e1ea99c0e5abcb5e59e2b31193156380c594e
	Dec 22 00:14:24 addons-568394 kubelet[1282]: I1222 00:14:24.941177    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-9stdn" podStartSLOduration=1.3729827399999999 podStartE2EDuration="1.941158526s" podCreationTimestamp="2025-12-22 00:14:23 +0000 UTC" firstStartedPulling="2025-12-22 00:14:23.869113873 +0000 UTC m=+261.615010156" lastFinishedPulling="2025-12-22 00:14:24.437289659 +0000 UTC m=+262.183185942" observedRunningTime="2025-12-22 00:14:24.94066748 +0000 UTC m=+262.686563771" watchObservedRunningTime="2025-12-22 00:14:24.941158526 +0000 UTC m=+262.687054809"
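
Note: two unrelated things are interleaved in the kubelet log above. The "Unable to retrieve pull secret ... gcp-auth not found" lines are informational and expected while the gcp-auth addon is not configured. The registry-creds container, however, is genuinely crash-looping, as the growing back-off (40s, then 1m20s) shows. To get at the crash reason (a sketch; the pod name is specific to this run):

	# reason and exit code of the last restart
	kubectl --context addons-568394 -n kube-system describe pod registry-creds-764b6fb674-nbrkf
	# stdout/stderr of the previous (crashed) container instance
	kubectl --context addons-568394 -n kube-system logs registry-creds-764b6fb674-nbrkf --previous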
	
	
	==> storage-provisioner [7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5] <==
	W1222 00:14:01.639475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:03.642946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:03.647394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:05.651241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:05.655440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:07.658564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:07.663423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:09.666136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:09.670620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:11.673618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:11.680197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:13.682834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:13.686956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:15.690443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:15.695016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:17.698026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:17.704452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:19.706942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:19.710992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:21.714047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:21.720568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:23.726687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:23.736316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:25.741426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:14:25.753638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
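Note: the storage-provisioner warnings above are client-go deprecation notices, most likely emitted by the provisioner's Endpoints-based leader election; the roughly two-second cadence matches a lease renewal loop, and nothing there is failing. Until the provisioner moves to Lease-based election they can be cross-checked (a sketch, assuming default minikube object names):

	# the election object the warnings refer to lives among the kube-system Endpoints
	kubectl --context addons-568394 -n kube-system get endpoints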
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-568394 -n addons-568394
helpers_test.go:270: (dbg) Run:  kubectl --context addons-568394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-568394 describe pod ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-568394 describe pod ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k: exit status 1 (84.73741ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d4zxr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vgv9k" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-568394 describe pod ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k: exit status 1
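Note: the NotFound errors above are benign. The two admission pods belong to completed ingress-nginx Jobs, and they were garbage-collected between the field-selector listing and the describe call, so the non-zero exit reflects cleanup timing rather than a problem. The Jobs themselves can be inspected instead (a sketch; the ingress-nginx namespace is assumed from the addon's defaults):

	kubectl --context addons-568394 -n ingress-nginx get jobs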
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (260.315069ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:14:26.688890 1891758 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:14:26.690318 1891758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:14:26.690365 1891758 out.go:374] Setting ErrFile to fd 2...
	I1222 00:14:26.690386 1891758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:14:26.690703 1891758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:14:26.691003 1891758 mustload.go:66] Loading cluster: addons-568394
	I1222 00:14:26.691402 1891758 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:14:26.691457 1891758 addons.go:622] checking whether the cluster is paused
	I1222 00:14:26.691585 1891758 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:14:26.691618 1891758 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:14:26.692151 1891758 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:14:26.709850 1891758 ssh_runner.go:195] Run: systemctl --version
	I1222 00:14:26.709939 1891758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:14:26.727734 1891758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:14:26.823757 1891758 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:14:26.823849 1891758 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:14:26.855004 1891758 cri.go:96] found id: "c04fb9917702ab74c4613058c79ce2df8f67ebe35f6bef41530cdc1d2cbef7b4"
	I1222 00:14:26.855076 1891758 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:14:26.855096 1891758 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:14:26.855116 1891758 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:14:26.855144 1891758 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:14:26.855164 1891758 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:14:26.855180 1891758 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:14:26.855197 1891758 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:14:26.855215 1891758 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:14:26.855250 1891758 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:14:26.855272 1891758 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:14:26.855289 1891758 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:14:26.855306 1891758 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:14:26.855323 1891758 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:14:26.855349 1891758 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:14:26.855381 1891758 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:14:26.855397 1891758 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:14:26.855415 1891758 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:14:26.855432 1891758 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:14:26.855459 1891758 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:14:26.855487 1891758 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:14:26.855504 1891758 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:14:26.855521 1891758 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:14:26.855538 1891758 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:14:26.855566 1891758 cri.go:96] found id: ""
	I1222 00:14:26.855636 1891758 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:14:26.872554 1891758 out.go:203] 
	W1222 00:14:26.875499 1891758 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:14:26.875543 1891758 out.go:285] * 
	* 
	W1222 00:14:26.893892 1891758 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:14:26.897060 1891758 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable ingress --alsologtostderr -v=1: exit status 11 (295.077089ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:14:26.978929 1891801 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:14:26.979716 1891801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:14:26.979761 1891801 out.go:374] Setting ErrFile to fd 2...
	I1222 00:14:26.979783 1891801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:14:26.980066 1891801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:14:26.980430 1891801 mustload.go:66] Loading cluster: addons-568394
	I1222 00:14:26.980879 1891801 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:14:26.980925 1891801 addons.go:622] checking whether the cluster is paused
	I1222 00:14:26.981070 1891801 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:14:26.981106 1891801 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:14:26.981671 1891801 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:14:27.001862 1891801 ssh_runner.go:195] Run: systemctl --version
	I1222 00:14:27.001943 1891801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:14:27.022410 1891801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:14:27.119987 1891801 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:14:27.120078 1891801 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:14:27.151122 1891801 cri.go:96] found id: "c04fb9917702ab74c4613058c79ce2df8f67ebe35f6bef41530cdc1d2cbef7b4"
	I1222 00:14:27.151141 1891801 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:14:27.151147 1891801 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:14:27.151151 1891801 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:14:27.151154 1891801 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:14:27.151157 1891801 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:14:27.151160 1891801 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:14:27.151163 1891801 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:14:27.151167 1891801 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:14:27.151172 1891801 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:14:27.151176 1891801 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:14:27.151184 1891801 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:14:27.151188 1891801 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:14:27.151190 1891801 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:14:27.151193 1891801 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:14:27.151198 1891801 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:14:27.151201 1891801 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:14:27.151205 1891801 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:14:27.151208 1891801 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:14:27.151210 1891801 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:14:27.151215 1891801 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:14:27.151218 1891801 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:14:27.151221 1891801 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:14:27.151224 1891801 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:14:27.151227 1891801 cri.go:96] found id: ""
	I1222 00:14:27.151276 1891801 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:14:27.167018 1891801 out.go:203] 
	W1222 00:14:27.170025 1891801 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:14:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:14:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:14:27.170055 1891801 out.go:285] * 
	* 
	W1222 00:14:27.188148 1891801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:14:27.191120 1891801 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (142.73s)
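Note: this failure, and every "exit status 11" below it, share one root cause. Before disabling an addon, minikube checks whether the cluster is paused; after enumerating kube-system containers via crictl it also runs "sudo runc list -f json" on the node, and on this CRI-O node /run/runc does not exist, so the check itself fails with MK_ADDON_DISABLE_PAUSED. The containers are fine; only the paused-state probe is broken. It can be reproduced directly (a sketch; profile name taken from this run):

	# reproduces the failing probe verbatim
	out/minikube-linux-arm64 -p addons-568394 ssh -- sudo runc list -f json
	# the CRI view of the same containers still works
	out/minikube-linux-arm64 -p addons-568394 ssh -- sudo crictl ps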

TestAddons/parallel/InspektorGadget (6.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-5ghgc" [452b1fe1-1668-4820-8cef-ff76504f7f14] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003985453s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (319.732251ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:04.205448 1889551 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:04.208066 1889551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:04.208131 1889551 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:04.208152 1889551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:04.208466 1889551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:04.208823 1889551 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:04.209291 1889551 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:04.209367 1889551 addons.go:622] checking whether the cluster is paused
	I1222 00:12:04.209507 1889551 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:04.209544 1889551 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:04.210089 1889551 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:04.235324 1889551 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:04.235381 1889551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:04.253702 1889551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:04.372877 1889551 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:04.372964 1889551 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:04.419871 1889551 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:04.419941 1889551 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:04.419961 1889551 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:04.419980 1889551 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:04.419999 1889551 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:04.420026 1889551 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:04.420045 1889551 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:04.420063 1889551 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:04.420080 1889551 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:04.420111 1889551 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:04.420134 1889551 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:04.420152 1889551 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:04.420169 1889551 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:04.420202 1889551 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:04.420219 1889551 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:04.420249 1889551 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:04.420267 1889551 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:04.420302 1889551 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:04.420320 1889551 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:04.420337 1889551 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:04.420374 1889551 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:04.420399 1889551 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:04.420418 1889551 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:04.420435 1889551 cri.go:96] found id: ""
	I1222 00:12:04.420534 1889551 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:04.436314 1889551 out.go:203] 
	W1222 00:12:04.439474 1889551 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:04.439499 1889551 out.go:285] * 
	* 
	W1222 00:12:04.457086 1889551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:04.460213 1889551 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.32s)

TestAddons/parallel/MetricsServer (6.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.322459ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003395346s
addons_test.go:465: (dbg) Run:  kubectl --context addons-568394 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (256.272564ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:11:57.928719 1889452 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:11:57.929467 1889452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:57.929516 1889452 out.go:374] Setting ErrFile to fd 2...
	I1222 00:11:57.929540 1889452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:57.930167 1889452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:11:57.930470 1889452 mustload.go:66] Loading cluster: addons-568394
	I1222 00:11:57.930852 1889452 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:57.930875 1889452 addons.go:622] checking whether the cluster is paused
	I1222 00:11:57.930984 1889452 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:57.931000 1889452 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:11:57.931543 1889452 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:11:57.948819 1889452 ssh_runner.go:195] Run: systemctl --version
	I1222 00:11:57.948870 1889452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:11:57.965838 1889452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:11:58.060157 1889452 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:11:58.060254 1889452 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:11:58.092049 1889452 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:11:58.092070 1889452 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:11:58.092076 1889452 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:11:58.092080 1889452 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:11:58.092083 1889452 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:11:58.092087 1889452 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:11:58.092091 1889452 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:11:58.092095 1889452 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:11:58.092098 1889452 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:11:58.092105 1889452 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:11:58.092108 1889452 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:11:58.092111 1889452 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:11:58.092114 1889452 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:11:58.092117 1889452 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:11:58.092120 1889452 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:11:58.092128 1889452 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:11:58.092132 1889452 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:11:58.092136 1889452 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:11:58.092140 1889452 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:11:58.092143 1889452 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:11:58.092147 1889452 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:11:58.092150 1889452 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:11:58.092153 1889452 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:11:58.092156 1889452 cri.go:96] found id: ""
	I1222 00:11:58.092209 1889452 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:11:58.110523 1889452 out.go:203] 
	W1222 00:11:58.113559 1889452 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:11:58.113584 1889452 out.go:285] * 
	* 
	W1222 00:11:58.132955 1889452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:11:58.136020 1889452 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)

TestAddons/parallel/CSI (34.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1222 00:11:38.714395 1881219 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1222 00:11:38.718114 1881219 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1222 00:11:38.718144 1881219 kapi.go:107] duration metric: took 7.835703ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.846157ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-568394 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-568394 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [c857d071-2c5b-4cb5-b526-31a09a601506] Pending
helpers_test.go:353: "task-pv-pod" [c857d071-2c5b-4cb5-b526-31a09a601506] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [c857d071-2c5b-4cb5-b526-31a09a601506] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.002877673s
addons_test.go:574: (dbg) Run:  kubectl --context addons-568394 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-568394 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-568394 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-568394 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-568394 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-568394 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-568394 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [09eb6ab6-12fd-4d3c-b9c4-5983ef4bb06d] Pending
helpers_test.go:353: "task-pv-pod-restore" [09eb6ab6-12fd-4d3c-b9c4-5983ef4bb06d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [09eb6ab6-12fd-4d3c-b9c4-5983ef4bb06d] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004148981s
addons_test.go:616: (dbg) Run:  kubectl --context addons-568394 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-568394 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-568394 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (262.362505ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:12.843078 1890000 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:12.844540 1890000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:12.844563 1890000 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:12.844570 1890000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:12.844866 1890000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:12.845202 1890000 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:12.845650 1890000 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:12.845689 1890000 addons.go:622] checking whether the cluster is paused
	I1222 00:12:12.845835 1890000 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:12.845853 1890000 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:12.846401 1890000 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:12.862813 1890000 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:12.862865 1890000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:12.879721 1890000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:12.977401 1890000 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:12.977490 1890000 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:13.010725 1890000 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:13.010752 1890000 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:13.010765 1890000 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:13.010771 1890000 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:13.010774 1890000 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:13.010780 1890000 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:13.010800 1890000 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:13.010809 1890000 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:13.010812 1890000 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:13.010818 1890000 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:13.010823 1890000 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:13.010828 1890000 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:13.010832 1890000 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:13.010854 1890000 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:13.010870 1890000 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:13.010876 1890000 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:13.010880 1890000 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:13.010883 1890000 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:13.010886 1890000 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:13.010897 1890000 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:13.010902 1890000 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:13.010905 1890000 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:13.010929 1890000 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:13.010939 1890000 cri.go:96] found id: ""
	I1222 00:12:13.011015 1890000 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:13.029744 1890000 out.go:203] 
	W1222 00:12:13.032637 1890000 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:13.032664 1890000 out.go:285] * 
	* 
	W1222 00:12:13.051111 1890000 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:13.053968 1890000 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (278.00462ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1222 00:12:13.109360 1890048 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:13.110117 1890048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:13.110135 1890048 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:13.110141 1890048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:13.110402 1890048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:13.110683 1890048 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:13.111040 1890048 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:13.111062 1890048 addons.go:622] checking whether the cluster is paused
	I1222 00:12:13.111172 1890048 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:13.111188 1890048 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:13.111932 1890048 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:13.142544 1890048 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:13.142602 1890048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:13.159703 1890048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:13.256138 1890048 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:13.256232 1890048 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:13.290130 1890048 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:13.290151 1890048 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:13.290157 1890048 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:13.290162 1890048 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:13.290165 1890048 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:13.290169 1890048 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:13.290172 1890048 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:13.290175 1890048 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:13.290178 1890048 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:13.290186 1890048 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:13.290189 1890048 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:13.290192 1890048 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:13.290195 1890048 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:13.290199 1890048 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:13.290208 1890048 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:13.290213 1890048 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:13.290216 1890048 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:13.290220 1890048 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:13.290223 1890048 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:13.290227 1890048 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:13.290239 1890048 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:13.290248 1890048 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:13.290256 1890048 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:13.290259 1890048 cri.go:96] found id: ""
	I1222 00:12:13.290310 1890048 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:13.307513 1890048 out.go:203] 
	W1222 00:12:13.310510 1890048 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:13.310544 1890048 out.go:285] * 
	* 
	W1222 00:12:13.329291 1890048 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:13.332326 1890048 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (34.63s)
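The two disable failures above share one root cause: before touching an addon, minikube checks whether the cluster is paused, and that check shells out to `sudo runc list -f json` on the node. The runc call exits 1 because /run/runc does not exist, and the CLI surfaces that as MK_ADDON_DISABLE_PAUSED (exit status 11) even though crictl enumerates the kube-system containers fine. A minimal diagnostic sketch follows, assuming the addons-568394 profile is still running; the alternate runc root in the last command is an assumption to check against the node's cri-o configuration, not something these logs confirm.

# Re-run the paused-state probe exactly as the logs above do; on this node it
# fails with: open /run/runc: no such file or directory
out/minikube-linux-arm64 -p addons-568394 ssh -- sudo runc list -f json

# Cross-check with the CRI view (same crictl invocation as cri.go:61 above),
# which does list the kube-system containers:
out/minikube-linux-arm64 -p addons-568394 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# If cri-o is configured with a non-default runtime root, runc needs --root to
# find its state; the path below is a hypothetical default to verify:
out/minikube-linux-arm64 -p addons-568394 ssh -- sudo runc --root /run/crio/runc list -f json

If the crictl listing succeeds while both runc invocations fail, the cluster was never actually paused and the MK_ADDON_DISABLE_PAUSED exit is a false positive of the runc-based probe.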

TestAddons/parallel/Headlamp (3.13s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-568394 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-568394 --alsologtostderr -v=1: exit status 11 (276.496788ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1222 00:11:35.632634 1888436 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:11:35.633929 1888436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:35.633973 1888436 out.go:374] Setting ErrFile to fd 2...
	I1222 00:11:35.633995 1888436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:35.634288 1888436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:11:35.634658 1888436 mustload.go:66] Loading cluster: addons-568394
	I1222 00:11:35.635070 1888436 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:35.635114 1888436 addons.go:622] checking whether the cluster is paused
	I1222 00:11:35.635248 1888436 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:35.635285 1888436 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:11:35.635952 1888436 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:11:35.653594 1888436 ssh_runner.go:195] Run: systemctl --version
	I1222 00:11:35.653647 1888436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:11:35.671903 1888436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:11:35.775828 1888436 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:11:35.775931 1888436 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:11:35.807147 1888436 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:11:35.807172 1888436 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:11:35.807186 1888436 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:11:35.807190 1888436 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:11:35.807194 1888436 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:11:35.807198 1888436 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:11:35.807201 1888436 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:11:35.807204 1888436 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:11:35.807207 1888436 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:11:35.807214 1888436 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:11:35.807218 1888436 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:11:35.807221 1888436 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:11:35.807225 1888436 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:11:35.807229 1888436 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:11:35.807232 1888436 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:11:35.807244 1888436 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:11:35.807265 1888436 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:11:35.807271 1888436 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:11:35.807275 1888436 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:11:35.807286 1888436 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:11:35.807291 1888436 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:11:35.807294 1888436 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:11:35.807298 1888436 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:11:35.807303 1888436 cri.go:96] found id: ""
	I1222 00:11:35.807362 1888436 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:11:35.824906 1888436 out.go:203] 
	W1222 00:11:35.827689 1888436 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:11:35.827760 1888436 out.go:285] * 
	* 
	W1222 00:11:35.846409 1888436 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:11:35.849544 1888436 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-568394 --alsologtostderr -v=1": exit status 11
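The enable path trips the identical guard, reported as MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED, so headlamp is never actually deployed. A quick way to tell a genuinely paused cluster from the runc false positive is the status template the post-mortem below also runs; this is a sketch, and the .Kubelet and .APIServer fields are standard minikube status template fields that this run did not itself query.

# A paused cluster shows the kubelet stopped while the host keeps running:
out/minikube-linux-arm64 status --format={{.Host}} -p addons-568394
out/minikube-linux-arm64 status --format={{.Kubelet}} -p addons-568394
out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-568394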
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-568394
helpers_test.go:244: (dbg) docker inspect addons-568394:

-- stdout --
	[
	    {
	        "Id": "a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1",
	        "Created": "2025-12-22T00:09:36.012946174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1882620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:09:36.080207369Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/hosts",
	        "LogPath": "/var/lib/docker/containers/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1/a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1-json.log",
	        "Name": "/addons-568394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-568394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-568394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a413d8502110b54b51017f81a7d9a7a69eaa3379ae15c626c7451b83c033c0e1",
	                "LowerDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b98a7af9a8c55160d44123250135c599913de21055282536b8a8b33d976aabdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-568394",
	                "Source": "/var/lib/docker/volumes/addons-568394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-568394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-568394",
	                "name.minikube.sigs.k8s.io": "addons-568394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df8371da999f17342f4b44aff0193d128420bd3236365605cccfa2f2e0ddbecb",
	            "SandboxKey": "/var/run/docker/netns/df8371da999f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-568394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:cd:35:b0:5b:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a087820eb8ed70c451d6da6176a5e2b9ebfc3067b99634f05c9c1cdf6468bcf",
	                    "EndpointID": "d30364cbb6ad6ba09a13b40de31e5967424a7d04cb19fcc6341eab17d9689495",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-568394",
	                        "a413d8502110"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
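The NetworkSettings.Ports block above is where every failing command got the SSH endpoint (127.0.0.1:38456). The Go template below is the same one the cli_runner lines in the logs execute, re-quoted for interactive use; the expected output is taken from this inspect dump.

# Resolve the host port mapped to the node's 22/tcp, as cli_runner does:
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-568394
# -> 38456 for this run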
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-568394 -n addons-568394
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-568394 logs -n 25: (1.471103538s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-000641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-000641   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-000641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-000641   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ -o=json --download-only -p download-only-805594 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-805594   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-805594                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-805594   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ -o=json --download-only -p download-only-304005 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                           │ download-only-304005   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-304005                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-304005   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-000641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-000641   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-805594                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-805594   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-304005                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-304005   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ --download-only -p download-docker-892972 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-892972 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ -p download-docker-892972                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-892972 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ --download-only -p binary-mirror-854679 --alsologtostderr --binary-mirror http://127.0.0.1:46683 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-854679   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ -p binary-mirror-854679                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-854679   │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ addons  │ disable dashboard -p addons-568394                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ addons  │ enable dashboard -p addons-568394                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ start   │ -p addons-568394 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:11 UTC │
	│ addons  │ addons-568394 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ addons-568394 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	│ addons  │ enable headlamp -p addons-568394 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-568394          │ jenkins │ v1.37.0 │ 22 Dec 25 00:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:09:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:09:30.092123 1882225 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:09:30.092261 1882225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:30.092272 1882225 out.go:374] Setting ErrFile to fd 2...
	I1222 00:09:30.092278 1882225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:30.092642 1882225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:09:30.093194 1882225 out.go:368] Setting JSON to false
	I1222 00:09:30.094093 1882225 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111119,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:09:30.094636 1882225 start.go:143] virtualization:  
	I1222 00:09:30.097961 1882225 out.go:179] * [addons-568394] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:09:30.101695 1882225 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:09:30.101786 1882225 notify.go:221] Checking for updates...
	I1222 00:09:30.107481 1882225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:09:30.110511 1882225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:09:30.113432 1882225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:09:30.116250 1882225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:09:30.119198 1882225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:09:30.122422 1882225 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:09:30.158044 1882225 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:09:30.158186 1882225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:30.218626 1882225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:09:30.208867904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:30.218734 1882225 docker.go:319] overlay module found
	I1222 00:09:30.221927 1882225 out.go:179] * Using the docker driver based on user configuration
	I1222 00:09:30.224737 1882225 start.go:309] selected driver: docker
	I1222 00:09:30.224756 1882225 start.go:928] validating driver "docker" against <nil>
	I1222 00:09:30.224771 1882225 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:09:30.225576 1882225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:30.287368 1882225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:09:30.278430721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:30.287522 1882225 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:09:30.287740 1882225 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:09:30.290752 1882225 out.go:179] * Using Docker driver with root privileges
	I1222 00:09:30.293539 1882225 cni.go:84] Creating CNI manager for ""
	I1222 00:09:30.293606 1882225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:09:30.293619 1882225 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:09:30.293701 1882225 start.go:353] cluster config:
	{Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:09:30.296817 1882225 out.go:179] * Starting "addons-568394" primary control-plane node in "addons-568394" cluster
	I1222 00:09:30.299619 1882225 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:09:30.302433 1882225 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:09:30.305122 1882225 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 00:09:30.305164 1882225 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:30.305177 1882225 cache.go:65] Caching tarball of preloaded images
	I1222 00:09:30.305197 1882225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:09:30.305256 1882225 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:09:30.305279 1882225 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 00:09:30.305617 1882225 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json ...
	I1222 00:09:30.305692 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json: {Name:mkf6221c0ab697d4028b31a8261a2450c5619982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:30.324013 1882225 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:09:30.324035 1882225 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:09:30.324054 1882225 cache.go:243] Successfully downloaded all kic artifacts
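The "exists in daemon, skipping pull" decision above can be reproduced with docker image inspect, whose exit status doubles as an existence test. A hedged Go sketch using the kicbase ref from this run (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon returns true when the local daemon already has the ref:
// `docker image inspect` exits non-zero for an unknown image.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5"
	fmt.Println(imageInDaemon(ref))
}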
	I1222 00:09:30.324091 1882225 start.go:360] acquireMachinesLock for addons-568394: {Name:mkd931fa31da2a4323eebbe142e9972411438b59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:09:30.324218 1882225 start.go:364] duration metric: took 93.06µs to acquireMachinesLock for "addons-568394"
	I1222 00:09:30.324249 1882225 start.go:93] Provisioning new machine with config: &{Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:09:30.324319 1882225 start.go:125] createHost starting for "" (driver="docker")
	I1222 00:09:30.327587 1882225 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1222 00:09:30.327828 1882225 start.go:159] libmachine.API.Create for "addons-568394" (driver="docker")
	I1222 00:09:30.327870 1882225 client.go:173] LocalClient.Create starting
	I1222 00:09:30.327996 1882225 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 00:09:30.571719 1882225 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 00:09:30.734596 1882225 cli_runner.go:164] Run: docker network inspect addons-568394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 00:09:30.752030 1882225 cli_runner.go:211] docker network inspect addons-568394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 00:09:30.752126 1882225 network_create.go:284] running [docker network inspect addons-568394] to gather additional debugging logs...
	I1222 00:09:30.752146 1882225 cli_runner.go:164] Run: docker network inspect addons-568394
	W1222 00:09:30.769656 1882225 cli_runner.go:211] docker network inspect addons-568394 returned with exit code 1
	I1222 00:09:30.769699 1882225 network_create.go:287] error running [docker network inspect addons-568394]: docker network inspect addons-568394: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-568394 not found
	I1222 00:09:30.769712 1882225 network_create.go:289] output of [docker network inspect addons-568394]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-568394 not found
	
	** /stderr **
	I1222 00:09:30.769814 1882225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:09:30.787985 1882225 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d2740}
	I1222 00:09:30.788034 1882225 network_create.go:124] attempt to create docker network addons-568394 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1222 00:09:30.788089 1882225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-568394 addons-568394
	I1222 00:09:30.849632 1882225 network_create.go:108] docker network addons-568394 192.168.49.0/24 created
	I1222 00:09:30.849665 1882225 kic.go:121] calculated static IP "192.168.49.2" for the "addons-568394" container
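The network creation step above is a single docker network create invocation. A minimal Go sketch that mirrors the logged arguments verbatim (the function wrapper is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork creates the labeled bridge network with a fixed
// subnet/gateway, exactly as in the logged command above.
func createClusterNetwork(name, subnet, gateway string) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values from this run: first free private subnet, gateway .1.
	if err := createClusterNetwork("addons-568394", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}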
	I1222 00:09:30.849737 1882225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 00:09:30.865028 1882225 cli_runner.go:164] Run: docker volume create addons-568394 --label name.minikube.sigs.k8s.io=addons-568394 --label created_by.minikube.sigs.k8s.io=true
	I1222 00:09:30.882233 1882225 oci.go:103] Successfully created a docker volume addons-568394
	I1222 00:09:30.882315 1882225 cli_runner.go:164] Run: docker run --rm --name addons-568394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568394 --entrypoint /usr/bin/test -v addons-568394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 00:09:31.992356 1882225 cli_runner.go:217] Completed: docker run --rm --name addons-568394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568394 --entrypoint /usr/bin/test -v addons-568394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib: (1.110001063s)
	I1222 00:09:31.992389 1882225 oci.go:107] Successfully prepared a docker volume addons-568394
	I1222 00:09:31.992442 1882225 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 00:09:31.992457 1882225 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 00:09:31.992531 1882225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-568394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 00:09:35.930569 1882225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-568394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.937986805s)
	I1222 00:09:35.930607 1882225 kic.go:203] duration metric: took 3.938146374s to extract preloaded images to volume ...
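The extraction step above runs tar inside a disposable container so the lz4 preload tarball on the host lands directly in the named docker volume. A sketch with the arguments copied from the logged command; the function wrapper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the logged `docker run --rm --entrypoint /usr/bin/tar`
// step: mount the tarball read-only, mount the volume, untar into it.
func extractPreload(tarball, volume, baseImage string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	tarball := "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5"
	if err := extractPreload(tarball, "addons-568394", image); err != nil {
		fmt.Println(err)
	}
}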
	W1222 00:09:35.930763 1882225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 00:09:35.930888 1882225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 00:09:35.993368 1882225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-568394 --name addons-568394 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568394 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-568394 --network addons-568394 --ip 192.168.49.2 --volume addons-568394:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 00:09:36.282470 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Running}}
	I1222 00:09:36.301729 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:09:36.325796 1882225 cli_runner.go:164] Run: docker exec addons-568394 stat /var/lib/dpkg/alternatives/iptables
	I1222 00:09:36.379185 1882225 oci.go:144] the created container "addons-568394" has a running status.
	I1222 00:09:36.379219 1882225 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa...
	I1222 00:09:36.800074 1882225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 00:09:36.830546 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:09:36.854478 1882225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 00:09:36.854518 1882225 kic_runner.go:114] Args: [docker exec --privileged addons-568394 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 00:09:36.896385 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:09:36.913840 1882225 machine.go:94] provisionDockerMachine start ...
	I1222 00:09:36.913952 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:36.931093 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:36.931445 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:36.931461 1882225 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:09:36.932094 1882225 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 00:09:40.081246 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-568394
	
	I1222 00:09:40.081272 1882225 ubuntu.go:182] provisioning hostname "addons-568394"
	I1222 00:09:40.081374 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:40.105602 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:40.105938 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:40.105955 1882225 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-568394 && echo "addons-568394" | sudo tee /etc/hostname
	I1222 00:09:40.251205 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-568394
	
	I1222 00:09:40.251280 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:40.272553 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:40.272889 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:40.272911 1882225 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-568394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-568394/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-568394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:09:40.405520 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:09:40.405555 1882225 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:09:40.405580 1882225 ubuntu.go:190] setting up certificates
	I1222 00:09:40.405596 1882225 provision.go:84] configureAuth start
	I1222 00:09:40.405660 1882225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568394
	I1222 00:09:40.422116 1882225 provision.go:143] copyHostCerts
	I1222 00:09:40.422201 1882225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:09:40.422313 1882225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:09:40.422373 1882225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:09:40.422423 1882225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.addons-568394 san=[127.0.0.1 192.168.49.2 addons-568394 localhost minikube]
	I1222 00:09:40.926868 1882225 provision.go:177] copyRemoteCerts
	I1222 00:09:40.926934 1882225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:09:40.926974 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:40.943704 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:41.041006 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:09:41.060237 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1222 00:09:41.076544 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:09:41.092823 1882225 provision.go:87] duration metric: took 687.197497ms to configureAuth
	I1222 00:09:41.092892 1882225 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:09:41.093102 1882225 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:09:41.093203 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:41.110013 1882225 main.go:144] libmachine: Using SSH client type: native
	I1222 00:09:41.110321 1882225 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38456 <nil> <nil>}
	I1222 00:09:41.110339 1882225 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:09:41.736796 1882225 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:09:41.736861 1882225 machine.go:97] duration metric: took 4.822992527s to provisionDockerMachine
	I1222 00:09:41.736900 1882225 client.go:176] duration metric: took 11.409005155s to LocalClient.Create
	I1222 00:09:41.736948 1882225 start.go:167] duration metric: took 11.409121s to libmachine.API.Create "addons-568394"
	I1222 00:09:41.736974 1882225 start.go:293] postStartSetup for "addons-568394" (driver="docker")
	I1222 00:09:41.737011 1882225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:09:41.737136 1882225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:09:41.737261 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:41.754205 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:41.849196 1882225 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:09:41.852377 1882225 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:09:41.852406 1882225 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:09:41.852418 1882225 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:09:41.852519 1882225 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:09:41.852553 1882225 start.go:296] duration metric: took 115.546851ms for postStartSetup
	I1222 00:09:41.852884 1882225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568394
	I1222 00:09:41.869222 1882225 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/config.json ...
	I1222 00:09:41.869536 1882225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:09:41.869597 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:41.889261 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:41.986944 1882225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:09:41.991487 1882225 start.go:128] duration metric: took 11.66714782s to createHost
	I1222 00:09:41.991513 1882225 start.go:83] releasing machines lock for "addons-568394", held for 11.66728183s
	I1222 00:09:41.991587 1882225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568394
	I1222 00:09:42.014100 1882225 ssh_runner.go:195] Run: cat /version.json
	I1222 00:09:42.014165 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:42.014475 1882225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:09:42.014551 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:09:42.043413 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:42.053034 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:09:42.150697 1882225 ssh_runner.go:195] Run: systemctl --version
	I1222 00:09:42.300167 1882225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:09:42.353379 1882225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:09:42.358472 1882225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:09:42.358617 1882225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:09:42.392361 1882225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 00:09:42.392438 1882225 start.go:496] detecting cgroup driver to use...
	I1222 00:09:42.392486 1882225 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:09:42.392587 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:09:42.411099 1882225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:09:42.423896 1882225 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:09:42.423958 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:09:42.441664 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:09:42.460000 1882225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:09:42.581955 1882225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:09:42.706497 1882225 docker.go:234] disabling docker service ...
	I1222 00:09:42.706608 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:09:42.732037 1882225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:09:42.745192 1882225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:09:42.876402 1882225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:09:43.008758 1882225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:09:43.022733 1882225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:09:43.036946 1882225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:09:43.037035 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.045835 1882225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:09:43.045902 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.054509 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.062786 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.071638 1882225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:09:43.079736 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.088443 1882225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.101669 1882225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:09:43.110304 1882225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:09:43.118734 1882225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:09:43.125892 1882225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:09:43.245807 1882225 ssh_runner.go:195] Run: sudo systemctl restart crio
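The sed pipeline above rewrites two keys in cri-o's drop-in config before the restart. An equivalent Go sketch (same file path and values as this run; a production tool would preserve permissions and keep a backup):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Point the pause image at registry.k8s.io/pause:3.10.1, as sed does above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Force the cgroupfs cgroup manager to match the detected host driver.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}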
	I1222 00:09:43.417302 1882225 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:09:43.417476 1882225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:09:43.421286 1882225 start.go:564] Will wait 60s for crictl version
	I1222 00:09:43.421390 1882225 ssh_runner.go:195] Run: which crictl
	I1222 00:09:43.424940 1882225 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:09:43.456739 1882225 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:09:43.456910 1882225 ssh_runner.go:195] Run: crio --version
	I1222 00:09:43.487458 1882225 ssh_runner.go:195] Run: crio --version
	I1222 00:09:43.524870 1882225 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 00:09:43.527743 1882225 cli_runner.go:164] Run: docker network inspect addons-568394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:09:43.543405 1882225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:09:43.547421 1882225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 00:09:43.557008 1882225 kubeadm.go:884] updating cluster {Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:09:43.557126 1882225 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 00:09:43.557185 1882225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:09:43.601035 1882225 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:09:43.601058 1882225 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:09:43.601116 1882225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:09:43.626853 1882225 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:09:43.626879 1882225 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:09:43.626888 1882225 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1222 00:09:43.626974 1882225 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-568394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:09:43.627057 1882225 ssh_runner.go:195] Run: crio config
	I1222 00:09:43.698400 1882225 cni.go:84] Creating CNI manager for ""
	I1222 00:09:43.698423 1882225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:09:43.698444 1882225 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:09:43.698486 1882225 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-568394 NodeName:addons-568394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:09:43.698645 1882225 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-568394"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
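A config like the one rendered above can be sanity-checked without mutating the node, assuming kubeadm's --dry-run mode; this is not a step minikube performs here, just a way to exercise the file it writes to /var/tmp/minikube:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Dry-run the same config the real `kubeadm init` below consumes.
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--dry-run")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}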
	I1222 00:09:43.698719 1882225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 00:09:43.707424 1882225 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:09:43.707497 1882225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:09:43.714927 1882225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1222 00:09:43.727546 1882225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 00:09:43.739976 1882225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1222 00:09:43.752466 1882225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:09:43.756081 1882225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 00:09:43.765242 1882225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:09:43.879267 1882225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:09:43.895146 1882225 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394 for IP: 192.168.49.2
	I1222 00:09:43.895169 1882225 certs.go:195] generating shared ca certs ...
	I1222 00:09:43.895185 1882225 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:43.895366 1882225 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:09:44.114888 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt ...
	I1222 00:09:44.114922 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt: {Name:mk199eb237eff97383e8528ff6c07c45d31d4b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.115135 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key ...
	I1222 00:09:44.115150 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key: {Name:mka96d6e3c8770000fbb53f13617293bc772777a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.115241 1882225 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:09:44.286676 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt ...
	I1222 00:09:44.286705 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt: {Name:mk53b4c41f4966d6848c53a3628bf41792ce1991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.286879 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key ...
	I1222 00:09:44.286891 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key: {Name:mk45a334a5e477374ae3052171b7454a5db10fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.286972 1882225 certs.go:257] generating profile certs ...
	I1222 00:09:44.287035 1882225 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.key
	I1222 00:09:44.287051 1882225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt with IP's: []
	I1222 00:09:44.355565 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt ...
	I1222 00:09:44.355591 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: {Name:mk5bf503a5f32042024c92f01a32d3a4f3d760d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.355766 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.key ...
	I1222 00:09:44.355780 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.key: {Name:mk4d333b1c7588ab00a81adc489e41d7bf26c44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.355863 1882225 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1
	I1222 00:09:44.355884 1882225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1222 00:09:44.470753 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1 ...
	I1222 00:09:44.470786 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1: {Name:mk4138bafddab23f0b2abd1677fe834d17f22e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.470963 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1 ...
	I1222 00:09:44.470978 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1: {Name:mkd67fe0a0c2234cb58b7487a99841414560f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:44.471064 1882225 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt.796e61e1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt
	I1222 00:09:44.471149 1882225 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key.796e61e1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key
	I1222 00:09:44.471205 1882225 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key
	I1222 00:09:44.471222 1882225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt with IP's: []
	I1222 00:09:45.082545 1882225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt ...
	I1222 00:09:45.082582 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt: {Name:mka5c990ec669d60475595f0992e7645039ee68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:45.082788 1882225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key ...
	I1222 00:09:45.082801 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key: {Name:mk23720dbc5119a1bfc55a4799448e475b8f2236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
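Each "generating ... ca cert" step above boils down to a self-signed x509 certificate plus a private key. A hedged Go sketch of the idea; the subject, key size, and lifetime here are illustrative, not minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA template: the cert is its own issuer.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// PEM-encode cert and key, the same on-disk shapes as ca.crt / ca.key.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}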
	I1222 00:09:45.083008 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:09:45.083067 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:09:45.083105 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:09:45.083146 1882225 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:09:45.083793 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:09:45.110937 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:09:45.137081 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:09:45.161188 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:09:45.194761 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 00:09:45.229896 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 00:09:45.259858 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:09:45.285203 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:09:45.309887 1882225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:09:45.329733 1882225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:09:45.348031 1882225 ssh_runner.go:195] Run: openssl version
	I1222 00:09:45.364110 1882225 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.374811 1882225 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:09:45.387263 1882225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.392895 1882225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.393011 1882225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:09:45.445532 1882225 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:09:45.455565 1882225 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
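The hash-and-symlink pair above is how OpenSSL-style trust stores index CAs: the subject hash names the link in /etc/ssl/certs. A sketch reproducing the two commands, with paths from this run:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}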
	I1222 00:09:45.463677 1882225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:09:45.467165 1882225 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 00:09:45.467216 1882225 kubeadm.go:401] StartCluster: {Name:addons-568394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-568394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:09:45.467291 1882225 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:09:45.467351 1882225 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:09:45.494656 1882225 cri.go:96] found id: ""
	I1222 00:09:45.494768 1882225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:09:45.502581 1882225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:09:45.510441 1882225 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:09:45.510504 1882225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:09:45.518351 1882225 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:09:45.518372 1882225 kubeadm.go:158] found existing configuration files:
	
	I1222 00:09:45.518421 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 00:09:45.526123 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:09:45.526216 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:09:45.534143 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 00:09:45.541796 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:09:45.541888 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:09:45.549127 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 00:09:45.556595 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:09:45.556687 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:09:45.564182 1882225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 00:09:45.571909 1882225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:09:45.571998 1882225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
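The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; anything else is removed so kubeadm can regenerate it. A minimal shell sketch of the same logic, with the endpoint and file list taken from this run:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint is absent (or the file is missing);
      # the stale file is then removed and kubeadm init rewrites it.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done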
	I1222 00:09:45.579565 1882225 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:09:45.622935 1882225 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 00:09:45.622993 1882225 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:09:45.645728 1882225 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:09:45.645811 1882225 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:09:45.645854 1882225 kubeadm.go:319] OS: Linux
	I1222 00:09:45.645908 1882225 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:09:45.645964 1882225 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:09:45.646027 1882225 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:09:45.646086 1882225 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:09:45.646138 1882225 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:09:45.646189 1882225 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:09:45.646247 1882225 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:09:45.646304 1882225 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:09:45.646357 1882225 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:09:45.710195 1882225 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:09:45.710316 1882225 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:09:45.710412 1882225 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:09:45.718622 1882225 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:09:45.724282 1882225 out.go:252]   - Generating certificates and keys ...
	I1222 00:09:45.724448 1882225 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:09:45.724563 1882225 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:09:46.082520 1882225 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 00:09:46.625536 1882225 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 00:09:46.890394 1882225 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 00:09:47.380002 1882225 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 00:09:48.124909 1882225 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 00:09:48.125288 1882225 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-568394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:09:49.912284 1882225 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 00:09:49.912639 1882225 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-568394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:09:50.888500 1882225 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 00:09:51.197769 1882225 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 00:09:51.616166 1882225 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 00:09:51.616453 1882225 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:09:51.943773 1882225 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:09:52.272456 1882225 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:09:52.867850 1882225 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:09:53.245460 1882225 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:09:53.402093 1882225 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:09:53.402712 1882225 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:09:53.407209 1882225 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:09:53.411054 1882225 out.go:252]   - Booting up control plane ...
	I1222 00:09:53.411157 1882225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:09:53.411233 1882225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:09:53.412054 1882225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:09:53.433885 1882225 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:09:53.434315 1882225 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:09:53.441727 1882225 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:09:53.442053 1882225 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:09:53.442100 1882225 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:09:53.565266 1882225 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:09:53.565410 1882225 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:09:55.066153 1882225 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500946045s
	I1222 00:09:55.069612 1882225 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 00:09:55.069730 1882225 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1222 00:09:55.069825 1882225 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 00:09:55.069913 1882225 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 00:09:58.870530 1882225 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.80047348s
	I1222 00:10:00.829675 1882225 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.76013097s
	I1222 00:10:01.571429 1882225 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501579933s
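The control-plane-check phase polls the component health endpoints listed above; the same probes can be run by hand on the node (URLs as printed by kubeadm; -k because the endpoints serve self-signed certificates):

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler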
	I1222 00:10:01.608579 1882225 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 00:10:01.625234 1882225 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 00:10:01.639815 1882225 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 00:10:01.640026 1882225 kubeadm.go:319] [mark-control-plane] Marking the node addons-568394 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 00:10:01.656494 1882225 kubeadm.go:319] [bootstrap-token] Using token: tibsgw.42l7uf9d6um2buj4
	I1222 00:10:01.659561 1882225 out.go:252]   - Configuring RBAC rules ...
	I1222 00:10:01.659700 1882225 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 00:10:01.664558 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 00:10:01.674909 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 00:10:01.679760 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 00:10:01.684432 1882225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 00:10:01.691259 1882225 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 00:10:01.981312 1882225 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 00:10:02.415271 1882225 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 00:10:02.978770 1882225 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 00:10:02.979995 1882225 kubeadm.go:319] 
	I1222 00:10:02.980069 1882225 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 00:10:02.980081 1882225 kubeadm.go:319] 
	I1222 00:10:02.980154 1882225 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 00:10:02.980161 1882225 kubeadm.go:319] 
	I1222 00:10:02.980185 1882225 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 00:10:02.980251 1882225 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 00:10:02.980302 1882225 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 00:10:02.980309 1882225 kubeadm.go:319] 
	I1222 00:10:02.980360 1882225 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 00:10:02.980368 1882225 kubeadm.go:319] 
	I1222 00:10:02.980412 1882225 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 00:10:02.980421 1882225 kubeadm.go:319] 
	I1222 00:10:02.980470 1882225 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 00:10:02.980543 1882225 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 00:10:02.980614 1882225 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 00:10:02.980623 1882225 kubeadm.go:319] 
	I1222 00:10:02.980702 1882225 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 00:10:02.980777 1882225 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 00:10:02.980785 1882225 kubeadm.go:319] 
	I1222 00:10:02.980863 1882225 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tibsgw.42l7uf9d6um2buj4 \
	I1222 00:10:02.980964 1882225 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 00:10:02.980987 1882225 kubeadm.go:319] 	--control-plane 
	I1222 00:10:02.980992 1882225 kubeadm.go:319] 
	I1222 00:10:02.981071 1882225 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 00:10:02.981078 1882225 kubeadm.go:319] 
	I1222 00:10:02.981160 1882225 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tibsgw.42l7uf9d6um2buj4 \
	I1222 00:10:02.981259 1882225 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 00:10:02.985715 1882225 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 00:10:02.985931 1882225 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:10:02.986040 1882225 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
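The join commands printed above embed a bootstrap token and a CA cert hash. Bootstrap tokens expire (24h by default), so a later join would normally mint a fresh command on the control plane rather than reuse this one; a sketch using standard kubeadm/openssl invocations (note this cluster keeps its certificates under /var/lib/minikube/certs, per the [certs] phase above, rather than the usual /etc/kubernetes/pki):

    # Print a ready-to-paste worker join command with a new token.
    kubeadm token create --print-join-command
    # Recompute the discovery CA cert hash independently if needed.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'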
	I1222 00:10:02.986056 1882225 cni.go:84] Creating CNI manager for ""
	I1222 00:10:02.986063 1882225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:10:02.989413 1882225 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 00:10:02.992388 1882225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 00:10:02.996616 1882225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 00:10:02.996636 1882225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 00:10:03.013275 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1222 00:10:03.301132 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:03.301245 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-568394 minikube.k8s.io/updated_at=2025_12_22T00_10_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=addons-568394 minikube.k8s.io/primary=true
	I1222 00:10:03.301042 1882225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 00:10:03.447642 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:03.447651 1882225 ops.go:34] apiserver oom_adj: -16
	I1222 00:10:03.948556 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:04.448663 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:04.948063 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:05.448313 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:05.948598 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:06.448498 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:06.948528 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:07.448599 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:07.947761 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:08.447743 1882225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 00:10:08.593775 1882225 kubeadm.go:1114] duration metric: took 5.292692913s to wait for elevateKubeSystemPrivileges
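The repeated `kubectl get sa default` calls above (one roughly every 500ms) are minikube waiting for the default service account to exist before treating kube-system privilege elevation as complete; the cluster-admin grant itself was the minikube-rbac ClusterRoleBinding created earlier. The equivalent wait, expressed directly as a sketch:

    # Block until the controller manager has created the default ServiceAccount.
    until kubectl get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done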
	I1222 00:10:08.593805 1882225 kubeadm.go:403] duration metric: took 23.126593097s to StartCluster
	I1222 00:10:08.593822 1882225 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:10:08.593943 1882225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:10:08.594376 1882225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:10:08.594573 1882225 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:10:08.594769 1882225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 00:10:08.595029 1882225 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:10:08.595059 1882225 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1222 00:10:08.595137 1882225 addons.go:70] Setting yakd=true in profile "addons-568394"
	I1222 00:10:08.595156 1882225 addons.go:239] Setting addon yakd=true in "addons-568394"
	I1222 00:10:08.595177 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.595665 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.596087 1882225 addons.go:70] Setting inspektor-gadget=true in profile "addons-568394"
	I1222 00:10:08.596114 1882225 addons.go:239] Setting addon inspektor-gadget=true in "addons-568394"
	I1222 00:10:08.596147 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.596586 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.596802 1882225 addons.go:70] Setting metrics-server=true in profile "addons-568394"
	I1222 00:10:08.596820 1882225 addons.go:239] Setting addon metrics-server=true in "addons-568394"
	I1222 00:10:08.596846 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.597245 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.599383 1882225 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-568394"
	I1222 00:10:08.599424 1882225 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-568394"
	I1222 00:10:08.599566 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.599860 1882225 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-568394"
	I1222 00:10:08.599891 1882225 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-568394"
	I1222 00:10:08.599915 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.600340 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.600489 1882225 addons.go:70] Setting cloud-spanner=true in profile "addons-568394"
	I1222 00:10:08.600506 1882225 addons.go:239] Setting addon cloud-spanner=true in "addons-568394"
	I1222 00:10:08.600532 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.600916 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.606844 1882225 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-568394"
	I1222 00:10:08.606924 1882225 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-568394"
	I1222 00:10:08.606958 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.607462 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.607843 1882225 addons.go:70] Setting registry=true in profile "addons-568394"
	I1222 00:10:08.607867 1882225 addons.go:239] Setting addon registry=true in "addons-568394"
	I1222 00:10:08.607895 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.608315 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.621373 1882225 addons.go:70] Setting default-storageclass=true in profile "addons-568394"
	I1222 00:10:08.621407 1882225 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-568394"
	I1222 00:10:08.621749 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.629184 1882225 addons.go:70] Setting registry-creds=true in profile "addons-568394"
	I1222 00:10:08.629224 1882225 addons.go:239] Setting addon registry-creds=true in "addons-568394"
	I1222 00:10:08.629260 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.629761 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.640468 1882225 addons.go:70] Setting gcp-auth=true in profile "addons-568394"
	I1222 00:10:08.640517 1882225 mustload.go:66] Loading cluster: addons-568394
	I1222 00:10:08.640740 1882225 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:10:08.640994 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.650124 1882225 addons.go:70] Setting storage-provisioner=true in profile "addons-568394"
	I1222 00:10:08.650170 1882225 addons.go:239] Setting addon storage-provisioner=true in "addons-568394"
	I1222 00:10:08.650205 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.650673 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.653397 1882225 addons.go:70] Setting ingress=true in profile "addons-568394"
	I1222 00:10:08.653429 1882225 addons.go:239] Setting addon ingress=true in "addons-568394"
	I1222 00:10:08.653468 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.653944 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.675223 1882225 addons.go:70] Setting ingress-dns=true in profile "addons-568394"
	I1222 00:10:08.675256 1882225 addons.go:239] Setting addon ingress-dns=true in "addons-568394"
	I1222 00:10:08.675304 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.675774 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.676484 1882225 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-568394"
	I1222 00:10:08.676517 1882225 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-568394"
	I1222 00:10:08.676824 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.723206 1882225 out.go:179] * Verifying Kubernetes components...
	I1222 00:10:08.724345 1882225 addons.go:70] Setting volcano=true in profile "addons-568394"
	I1222 00:10:08.724379 1882225 addons.go:239] Setting addon volcano=true in "addons-568394"
	I1222 00:10:08.724421 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.724869 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.727443 1882225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:10:08.727920 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.756454 1882225 addons.go:70] Setting volumesnapshots=true in profile "addons-568394"
	I1222 00:10:08.756529 1882225 addons.go:239] Setting addon volumesnapshots=true in "addons-568394"
	I1222 00:10:08.756580 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.757097 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.801751 1882225 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1222 00:10:08.805460 1882225 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1222 00:10:08.805485 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1222 00:10:08.805552 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.860641 1882225 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1222 00:10:08.867986 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1222 00:10:08.868019 1882225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1222 00:10:08.868121 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.888892 1882225 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1222 00:10:08.894664 1882225 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1222 00:10:08.895425 1882225 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1222 00:10:08.895467 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1222 00:10:08.895566 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.916103 1882225 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1222 00:10:08.925187 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1222 00:10:08.925255 1882225 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1222 00:10:08.925369 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.932280 1882225 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1222 00:10:08.932753 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1222 00:10:08.949744 1882225 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1222 00:10:08.957997 1882225 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1222 00:10:08.958064 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1222 00:10:08.958166 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:08.962351 1882225 addons.go:239] Setting addon default-storageclass=true in "addons-568394"
	I1222 00:10:08.967505 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:08.968070 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:08.967264 1882225 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1222 00:10:09.004786 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1222 00:10:09.004869 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.005308 1882225 out.go:179]   - Using image docker.io/registry:3.0.0
	I1222 00:10:09.006568 1882225 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-568394"
	I1222 00:10:09.006677 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:09.007405 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:09.039750 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1222 00:10:09.043303 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W1222 00:10:08.967348 1882225 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1222 00:10:09.047463 1882225 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:10:09.047626 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1222 00:10:09.051150 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1222 00:10:09.048349 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1222 00:10:09.051354 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1222 00:10:09.051434 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.048431 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.059465 1882225 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:10:09.059481 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:10:09.059537 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.064588 1882225 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1222 00:10:09.065555 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1222 00:10:09.065702 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1222 00:10:09.065883 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:09.111665 1882225 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1222 00:10:09.111686 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1222 00:10:09.111753 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.158910 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1222 00:10:09.158930 1882225 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1222 00:10:09.158996 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.162679 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.163497 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.168544 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.169241 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1222 00:10:09.169376 1882225 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1222 00:10:09.170063 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1222 00:10:09.173888 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1222 00:10:09.174077 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1222 00:10:09.174089 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1222 00:10:09.174156 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.185233 1882225 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:10:09.185257 1882225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:10:09.185315 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.186333 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1222 00:10:09.188685 1882225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1222 00:10:09.189262 1882225 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1222 00:10:09.189284 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1222 00:10:09.189413 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.197639 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1222 00:10:09.197661 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1222 00:10:09.197725 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.205375 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.224274 1882225 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1222 00:10:09.228756 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.236435 1882225 out.go:179]   - Using image docker.io/busybox:stable
	I1222 00:10:09.241441 1882225 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1222 00:10:09.241475 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1222 00:10:09.241542 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:09.253906 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.278767 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.339738 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.340241 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.352483 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.352833 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.353978 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	W1222 00:10:09.355305 1882225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1222 00:10:09.355340 1882225 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I1222 00:10:09.362376 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.371700 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:09.390720 1882225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:10:09.390795 1882225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
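The long pipeline above rewrites the live coredns ConfigMap in place: it dumps the Corefile, uses sed to splice a hosts block (mapping host.minikube.internal to the gateway, 192.168.49.1) ahead of the forward directive and a log directive ahead of errors, then feeds the result back through kubectl replace. The resulting Corefile stanza looks roughly like:

    log
    errors
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf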
	I1222 00:10:09.980058 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1222 00:10:09.980125 1882225 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1222 00:10:09.988342 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1222 00:10:09.994287 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:10:10.013910 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1222 00:10:10.095561 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1222 00:10:10.146056 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1222 00:10:10.218966 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1222 00:10:10.218986 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1222 00:10:10.256835 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1222 00:10:10.275254 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1222 00:10:10.275336 1882225 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1222 00:10:10.283459 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1222 00:10:10.283535 1882225 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1222 00:10:10.283951 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:10:10.348458 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1222 00:10:10.379977 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1222 00:10:10.449311 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1222 00:10:10.575159 1882225 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1222 00:10:10.575188 1882225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1222 00:10:10.632395 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1222 00:10:10.632428 1882225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1222 00:10:10.654475 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1222 00:10:10.654501 1882225 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1222 00:10:10.661423 1882225 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1222 00:10:10.661451 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1222 00:10:10.807425 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1222 00:10:10.807491 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1222 00:10:10.863325 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1222 00:10:10.917629 1882225 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1222 00:10:10.917700 1882225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1222 00:10:10.938740 1882225 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1222 00:10:10.938814 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1222 00:10:11.078709 1882225 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1222 00:10:11.078789 1882225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1222 00:10:11.097420 1882225 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1222 00:10:11.097486 1882225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1222 00:10:11.192972 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1222 00:10:11.193046 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1222 00:10:11.235255 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1222 00:10:11.350367 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1222 00:10:11.397073 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1222 00:10:11.397147 1882225 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1222 00:10:11.460012 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1222 00:10:11.460087 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1222 00:10:11.608360 1882225 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1222 00:10:11.608437 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1222 00:10:11.672875 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1222 00:10:11.957648 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1222 00:10:11.957720 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1222 00:10:12.208416 1882225 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1222 00:10:12.208487 1882225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1222 00:10:12.485584 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1222 00:10:12.485655 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1222 00:10:12.536380 1882225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.145557426s)
	I1222 00:10:12.536449 1882225 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1222 00:10:12.537435 1882225 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.146681094s)
	I1222 00:10:12.537511 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.549096575s)
	I1222 00:10:12.538245 1882225 node_ready.go:35] waiting up to 6m0s for node "addons-568394" to be "Ready" ...
	I1222 00:10:12.787769 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1222 00:10:12.787806 1882225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1222 00:10:13.046566 1882225 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-568394" context rescaled to 1 replicas
	I1222 00:10:13.166647 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1222 00:10:13.166675 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1222 00:10:13.304902 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1222 00:10:13.304926 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1222 00:10:13.486858 1882225 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1222 00:10:13.486928 1882225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1222 00:10:13.657967 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1222 00:10:14.211106 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.216743986s)
	W1222 00:10:14.541876 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:14.672150 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.658051087s)
	I1222 00:10:14.672274 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.576691997s)
	I1222 00:10:14.672356 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.526281137s)
	I1222 00:10:14.672427 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.415530335s)
	I1222 00:10:14.672676 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.388682195s)
	I1222 00:10:14.672826 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.324287516s)
	I1222 00:10:14.672880 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.292842963s)
	W1222 00:10:14.752410 1882225 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
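The failure above is a standard optimistic-concurrency conflict: two writers raced to update the storage classes, and the losing update carried a stale resourceVersion. A re-read and retry normally succeeds; alternatively, a merge patch sidesteps the conflict because it sends no resourceVersion at all. A sketch of marking local-path default that way (annotation name per the Kubernetes storage documentation):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'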
	I1222 00:10:15.872353 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.422938082s)
	I1222 00:10:15.872429 1882225 addons.go:495] Verifying addon ingress=true in "addons-568394"
	I1222 00:10:15.872610 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.009215824s)
	I1222 00:10:15.872644 1882225 addons.go:495] Verifying addon registry=true in "addons-568394"
	I1222 00:10:15.872921 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.637586209s)
	I1222 00:10:15.873187 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.522731687s)
	I1222 00:10:15.873597 1882225 addons.go:495] Verifying addon metrics-server=true in "addons-568394"
	I1222 00:10:15.873278 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.200329425s)
	W1222 00:10:15.873630 1882225 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1222 00:10:15.873649 1882225 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
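The failure above is a CRD ordering race: the single kubectl apply submits the VolumeSnapshotClass in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet registered the new kind when the custom resource arrives, hence "no matches for kind VolumeSnapshotClass". minikube copes by retrying (below it re-runs the apply with --force). A minimal manual equivalent, assuming the same addon manifests on disk, is to install the CRD first, wait for it to be established, and only then apply the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
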
	I1222 00:10:15.875376 1882225 out.go:179] * Verifying ingress addon...
	I1222 00:10:15.877559 1882225 out.go:179] * Verifying registry addon...
	I1222 00:10:15.877559 1882225 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-568394 service yakd-dashboard -n yakd-dashboard
	
	I1222 00:10:15.881380 1882225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1222 00:10:15.883192 1882225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1222 00:10:15.888407 1882225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1222 00:10:15.888431 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:15.892835 1882225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1222 00:10:15.892858 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
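The kapi.go polling above lists pods by label selector and loops until each reports Ready. A hedged kubectl equivalent of the same two checks, using the selectors and namespaces from the log, would be:

	kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=120s
	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=120s
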
	I1222 00:10:16.123304 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1222 00:10:16.126225 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.468158219s)
	I1222 00:10:16.126298 1882225 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-568394"
	I1222 00:10:16.129234 1882225 out.go:179] * Verifying csi-hostpath-driver addon...
	I1222 00:10:16.133084 1882225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1222 00:10:16.138692 1882225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1222 00:10:16.138757 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:16.387135 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:16.387621 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:16.636336 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:16.676726 1882225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1222 00:10:16.676835 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:16.693863 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:10:16.802437 1882225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1222 00:10:16.816411 1882225 addons.go:239] Setting addon gcp-auth=true in "addons-568394"
	I1222 00:10:16.816458 1882225 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:10:16.816905 1882225 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:10:16.833752 1882225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1222 00:10:16.833811 1882225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:10:16.851263 1882225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
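The docker container inspect calls above use a Go template to recover the host port that Docker mapped to the guest's SSH port (22/tcp); that value feeds the new ssh client (Port:38456 here). Run standalone with plain shell quoting, the same template prints just that port for the container named in the log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-568394
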
	I1222 00:10:16.888182 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:16.888612 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1222 00:10:17.041234 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:17.136278 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:17.385353 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:17.386379 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:17.635819 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:17.885539 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:17.886939 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:18.137024 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:18.386145 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:18.386522 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:18.638619 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:18.815340 1882225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.691947205s)
	I1222 00:10:18.815403 1882225 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.981628895s)
	I1222 00:10:18.818737 1882225 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1222 00:10:18.821575 1882225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1222 00:10:18.824434 1882225 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1222 00:10:18.824457 1882225 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1222 00:10:18.838569 1882225 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1222 00:10:18.838593 1882225 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1222 00:10:18.854090 1882225 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1222 00:10:18.854116 1882225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1222 00:10:18.868837 1882225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1222 00:10:18.887049 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:18.887959 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1222 00:10:19.042398 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:19.136454 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:19.361092 1882225 addons.go:495] Verifying addon gcp-auth=true in "addons-568394"
	I1222 00:10:19.364189 1882225 out.go:179] * Verifying gcp-auth addon...
	I1222 00:10:19.367839 1882225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1222 00:10:19.379305 1882225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1222 00:10:19.379378 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:19.478764 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:19.478767 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:19.636106 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:19.871102 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:19.884991 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:19.887146 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:20.136915 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:20.371042 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:20.384836 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:20.388200 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:20.637159 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:20.871195 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:20.885430 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:20.886535 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:21.136836 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:21.370639 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:21.384685 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:21.386550 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1222 00:10:21.541480 1882225 node_ready.go:57] node "addons-568394" has "Ready":"False" status (will retry)
	I1222 00:10:21.636840 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:21.871089 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:21.884727 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:21.886163 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:22.136936 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:22.370899 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:22.384951 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:22.386974 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:22.637749 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:22.871096 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:22.884927 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:22.886039 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:23.137411 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:23.371350 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:23.385381 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:23.386026 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:23.575657 1882225 node_ready.go:49] node "addons-568394" is "Ready"
	I1222 00:10:23.575694 1882225 node_ready.go:38] duration metric: took 11.037398399s for node "addons-568394" to be "Ready" ...
	I1222 00:10:23.575730 1882225 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:10:23.575815 1882225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:10:23.598925 1882225 api_server.go:72] duration metric: took 15.004321773s to wait for apiserver process to appear ...
	I1222 00:10:23.598953 1882225 api_server.go:88] waiting for apiserver healthz status ...
	I1222 00:10:23.598992 1882225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1222 00:10:23.615350 1882225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1222 00:10:23.628855 1882225 api_server.go:141] control plane version: v1.34.3
	I1222 00:10:23.628883 1882225 api_server.go:131] duration metric: took 29.923053ms to wait for apiserver health ...
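The healthz probe above hits the apiserver endpoint directly over TLS. The same check can be reproduced through kubectl, which reuses the kubeconfig credentials instead of a raw HTTPS request:

	kubectl get --raw /healthz
	# expected output: ok
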
	I1222 00:10:23.628892 1882225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 00:10:23.673164 1882225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1222 00:10:23.673191 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:23.674116 1882225 system_pods.go:59] 19 kube-system pods found
	I1222 00:10:23.674179 1882225 system_pods.go:61] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending
	I1222 00:10:23.674195 1882225 system_pods.go:61] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:23.674201 1882225 system_pods.go:61] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:23.674212 1882225 system_pods.go:61] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending
	I1222 00:10:23.674217 1882225 system_pods.go:61] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:23.674221 1882225 system_pods.go:61] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:23.674225 1882225 system_pods.go:61] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:23.674229 1882225 system_pods.go:61] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:23.674233 1882225 system_pods.go:61] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending
	I1222 00:10:23.674267 1882225 system_pods.go:61] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:23.674279 1882225 system_pods.go:61] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:23.674285 1882225 system_pods.go:61] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending
	I1222 00:10:23.674294 1882225 system_pods.go:61] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending
	I1222 00:10:23.674299 1882225 system_pods.go:61] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending
	I1222 00:10:23.674302 1882225 system_pods.go:61] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending
	I1222 00:10:23.674306 1882225 system_pods.go:61] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending
	I1222 00:10:23.674318 1882225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending
	I1222 00:10:23.674326 1882225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending
	I1222 00:10:23.674330 1882225 system_pods.go:61] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending
	I1222 00:10:23.674347 1882225 system_pods.go:74] duration metric: took 45.415374ms to wait for pod list to return data ...
	I1222 00:10:23.674360 1882225 default_sa.go:34] waiting for default service account to be created ...
	I1222 00:10:23.689706 1882225 default_sa.go:45] found service account: "default"
	I1222 00:10:23.689734 1882225 default_sa.go:55] duration metric: took 15.367787ms for default service account to be created ...
	I1222 00:10:23.689743 1882225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 00:10:23.732995 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:23.733031 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending
	I1222 00:10:23.733039 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:23.733043 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:23.733070 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending
	I1222 00:10:23.733078 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:23.733082 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:23.733087 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:23.733091 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:23.733099 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending
	I1222 00:10:23.733110 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:23.733119 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:23.733123 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending
	I1222 00:10:23.733127 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending
	I1222 00:10:23.733145 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending
	I1222 00:10:23.733159 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:23.733177 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending
	I1222 00:10:23.733189 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending
	I1222 00:10:23.733193 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending
	I1222 00:10:23.733198 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending
	I1222 00:10:23.733216 1882225 retry.go:84] will retry after 200ms: missing components: kube-dns
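The retry fires because the k8s-apps check requires kube-dns (CoreDNS) to be running and the coredns pod above is still Pending. A quick manual check for the same component, assuming the standard k8s-app=kube-dns label that CoreDNS deployments carry, would be:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
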
	I1222 00:10:23.925791 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:23.926010 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:23.967190 1882225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1222 00:10:23.967215 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:23.984389 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:23.984421 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending
	I1222 00:10:23.984428 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:23.984432 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:23.984474 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending
	I1222 00:10:23.984485 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:23.984491 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:23.984495 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:23.984500 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:23.984510 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending
	I1222 00:10:23.984514 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:23.984520 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:23.984541 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending
	I1222 00:10:23.984560 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending
	I1222 00:10:23.984573 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending
	I1222 00:10:23.984587 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:23.984601 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending
	I1222 00:10:23.984608 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending
	I1222 00:10:23.984620 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:23.984637 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 00:10:24.139948 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:24.262984 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:24.263078 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 00:10:24.263100 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending
	I1222 00:10:24.263119 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending
	I1222 00:10:24.263154 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1222 00:10:24.263174 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:24.263193 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:24.263213 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:24.263248 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:24.263269 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1222 00:10:24.263288 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:24.263308 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:24.263342 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1222 00:10:24.263362 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1222 00:10:24.263382 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1222 00:10:24.263412 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:24.263436 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1222 00:10:24.263456 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.263476 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.263507 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 00:10:24.375688 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:24.394992 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:24.400361 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:24.655523 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:24.741191 1882225 system_pods.go:86] 19 kube-system pods found
	I1222 00:10:24.741281 1882225 system_pods.go:89] "coredns-66bc5c9577-6hqzq" [138d05ad-94eb-40af-9bc1-7a7b499f8109] Running
	I1222 00:10:24.741308 1882225 system_pods.go:89] "csi-hostpath-attacher-0" [bda0960c-4737-4854-b19d-e8b977eff963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1222 00:10:24.741343 1882225 system_pods.go:89] "csi-hostpath-resizer-0" [7a138983-aa50-4f54-a627-02285a1aa0af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1222 00:10:24.741370 1882225 system_pods.go:89] "csi-hostpathplugin-26hr6" [ee0d50e4-b22f-4d33-9e52-daeb73334011] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1222 00:10:24.741394 1882225 system_pods.go:89] "etcd-addons-568394" [3ccb9d0f-31af-47df-b579-61e871a69c84] Running
	I1222 00:10:24.741422 1882225 system_pods.go:89] "kindnet-qsbwv" [5e645827-f844-4c6e-9464-3079faff08c8] Running
	I1222 00:10:24.741442 1882225 system_pods.go:89] "kube-apiserver-addons-568394" [4505b0f4-de5d-4b81-8603-26388c22b595] Running
	I1222 00:10:24.741460 1882225 system_pods.go:89] "kube-controller-manager-addons-568394" [f746dffa-88f0-4747-a279-7da23eed30e6] Running
	I1222 00:10:24.741499 1882225 system_pods.go:89] "kube-ingress-dns-minikube" [e937a647-01d8-4853-8eb8-f062adee6f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1222 00:10:24.741518 1882225 system_pods.go:89] "kube-proxy-7rf5z" [7a9184b3-857d-4d39-b7a0-f0b8cbfd1f4b] Running
	I1222 00:10:24.741538 1882225 system_pods.go:89] "kube-scheduler-addons-568394" [16afd3a3-c6e6-4224-81b3-33b8f5642913] Running
	I1222 00:10:24.741572 1882225 system_pods.go:89] "metrics-server-85b7d694d7-gdkf4" [3f711c3b-04cc-42d6-b22d-3ba58786cd62] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1222 00:10:24.741594 1882225 system_pods.go:89] "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1222 00:10:24.741614 1882225 system_pods.go:89] "registry-6b586f9694-g9bl2" [190b71a8-685c-4272-8ebb-519871d59ca7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1222 00:10:24.741637 1882225 system_pods.go:89] "registry-creds-764b6fb674-nbrkf" [e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1222 00:10:24.741665 1882225 system_pods.go:89] "registry-proxy-nsh85" [81f6291a-e2ed-4640-aae8-e3127f0aa931] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1222 00:10:24.741684 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bkzz5" [76771992-63ee-45d4-a47a-f07ed8dd3420] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.741704 1882225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dh27b" [f96e4090-f1a8-4ffc-828b-b8eadd9cb010] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1222 00:10:24.741738 1882225 system_pods.go:89] "storage-provisioner" [4fce859e-0463-43cf-b5c5-21ed16256332] Running
	I1222 00:10:24.741760 1882225 system_pods.go:126] duration metric: took 1.052010226s to wait for k8s-apps to be running ...
	I1222 00:10:24.741780 1882225 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 00:10:24.741856 1882225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:10:24.781154 1882225 system_svc.go:56] duration metric: took 39.364176ms WaitForService to wait for kubelet
	I1222 00:10:24.781237 1882225 kubeadm.go:587] duration metric: took 16.186638773s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:10:24.781274 1882225 node_conditions.go:102] verifying NodePressure condition ...
	I1222 00:10:24.784700 1882225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 00:10:24.784795 1882225 node_conditions.go:123] node cpu capacity is 2
	I1222 00:10:24.784825 1882225 node_conditions.go:105] duration metric: took 3.503883ms to run NodePressure ...
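The NodePressure verification above reads the node's reported capacity and conditions (203034800Ki ephemeral storage and 2 CPUs here). A hedged kubectl equivalent, with the node name taken from the log, is:

	kubectl get node addons-568394 -o jsonpath='{.status.capacity}'
	kubectl describe node addons-568394 | grep -E 'MemoryPressure|DiskPressure|PIDPressure'
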
	I1222 00:10:24.784850 1882225 start.go:242] waiting for startup goroutines ...
	I1222 00:10:24.872000 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:24.885114 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:24.887223 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:25.137994 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:25.371493 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:25.385455 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:25.387147 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:25.636993 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:25.871678 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:25.885416 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:25.887869 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:26.138290 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:26.371059 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:26.384880 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:26.387200 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:26.637766 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:26.870728 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:26.885248 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:26.887749 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:27.137311 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:27.372681 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:27.390768 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:27.391327 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:27.637247 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:27.871532 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:27.886102 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:27.888187 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:28.137089 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:28.371717 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:28.387174 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:28.389284 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:28.639388 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:28.871623 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:28.885789 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:28.888481 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:29.140231 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:29.374128 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:29.386361 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:29.390099 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:29.636969 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:29.871257 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:29.886593 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:29.887091 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:30.143233 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:30.372392 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:30.387841 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:30.388290 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:30.637595 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:30.892766 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:30.894007 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:30.894749 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:31.138099 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:31.371732 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:31.385518 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:31.387845 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:31.637478 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:31.871194 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:31.885921 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:31.887142 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:32.136232 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:32.371158 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:32.385956 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:32.387161 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:32.636841 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:32.871225 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:32.887027 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:32.887478 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:33.137012 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:33.371471 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:33.385860 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:33.387450 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:33.636852 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:33.870774 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:33.884746 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:33.886839 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:34.138124 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:34.370623 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:34.385761 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:34.387397 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:34.637526 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:34.871065 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:34.884974 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:34.886952 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:35.136644 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:35.371370 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:35.385981 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:35.387707 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:35.637416 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:35.871317 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:35.884465 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:35.886881 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:36.136240 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:36.371300 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:36.387817 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:36.387868 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:36.656973 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:36.871150 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:36.887479 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:36.887842 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:37.138305 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:37.371939 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:37.388833 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:37.390117 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:37.636524 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:37.875754 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:37.886311 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:37.888043 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:38.136671 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:38.378165 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:38.387925 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:38.389937 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:38.636607 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:38.870581 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:38.884745 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:38.887604 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:39.136535 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:39.370814 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:39.384866 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:39.386762 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:39.637879 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:39.871435 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:39.885449 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:39.886949 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:40.137182 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:40.371194 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:40.386454 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:40.386806 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:40.638328 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:40.871652 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:40.972688 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:40.973301 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:41.136453 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:41.371611 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:41.385068 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:41.387588 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:41.637261 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:41.871537 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:41.884190 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:41.886862 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:42.142900 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:42.372220 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:42.388517 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:42.394260 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:42.636806 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:42.870530 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:42.884776 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:42.887074 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:43.137114 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:43.371295 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:43.386891 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:43.388231 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:43.636586 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:43.871780 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:43.887685 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:43.888179 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:44.136248 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:44.370774 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:44.384675 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:44.387124 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:44.636430 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:44.875112 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:44.899630 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:44.912328 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:45.139207 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:45.374005 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:45.386692 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:45.388744 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:45.637709 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:45.870837 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:45.884771 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:45.886498 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:46.137629 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:46.371646 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:46.385756 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:46.387322 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:46.636701 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:46.872072 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:46.886395 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:46.887873 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:47.136515 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:47.371555 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:47.385639 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:47.386908 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:47.636857 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:47.871706 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:47.885172 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:47.887782 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:48.137179 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:48.371131 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:48.386418 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:48.386623 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:48.636859 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:48.870880 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:48.886303 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:48.887148 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:49.136705 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:49.371310 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:49.385389 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:49.386834 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:49.636835 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:49.871064 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:49.886238 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:49.887926 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:50.136625 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:50.370420 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:50.384347 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:50.386059 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:50.636831 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:50.871761 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:50.885273 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:50.974255 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:51.136884 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:51.371561 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:51.384450 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:51.386641 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:51.637199 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:51.876440 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:51.884837 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:51.892935 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:52.137405 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:52.371747 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:52.385637 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:52.387076 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:52.636878 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:52.886264 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:52.888259 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:52.891729 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:53.137746 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:53.370615 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:53.384862 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:53.387188 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:53.637061 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:53.872502 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:53.886530 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:53.886705 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:54.136960 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:54.371712 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:54.385416 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:54.387937 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:54.638888 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:54.876499 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:54.887532 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:54.887661 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:55.166716 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:55.371332 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:55.385365 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:55.385921 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:55.637842 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:55.871663 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:55.972802 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:55.973006 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:56.138100 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:56.371378 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:56.385842 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:56.387706 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:56.637739 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:56.872528 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:56.886108 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:56.886863 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:57.137744 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:57.370738 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:57.384865 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:57.387345 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:57.636502 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:57.871794 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:57.885405 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:57.886381 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:58.136598 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:58.370751 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:58.386312 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:58.386407 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:58.637141 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:58.871736 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:58.886360 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:58.890609 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:59.137730 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:59.374552 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:59.390073 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:10:59.390492 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:59.637857 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:10:59.870851 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:10:59.885221 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:10:59.886707 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:00.145499 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:00.372628 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:00.387778 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:00.388436 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:00.636659 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:00.871734 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:00.884976 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:00.887076 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:01.138300 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:01.371375 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:01.384949 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:01.388932 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:01.637581 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:01.871599 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:01.885665 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:01.887240 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:02.136494 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:02.372457 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:02.385550 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:02.388880 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:02.637801 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:02.872431 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:02.885313 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:02.887127 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:03.136576 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:03.371586 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:03.387516 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:03.390009 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:03.637303 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:03.871838 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:03.886602 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:03.887790 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1222 00:11:04.137872 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:04.371003 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:04.386617 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:04.387078 1882225 kapi.go:107] duration metric: took 48.503885616s to wait for kubernetes.io/minikube-addons=registry ...
	I1222 00:11:04.641880 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:04.871899 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:04.885465 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:05.138146 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:05.371531 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:05.385024 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:05.637181 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:05.871692 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:05.884967 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:06.137765 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:06.371743 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:06.385003 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:06.638157 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:06.874493 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:06.886828 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:07.140149 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:07.372324 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:07.386269 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:07.637006 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:07.870615 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:07.884527 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:08.137753 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:08.371876 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:08.385080 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:08.636463 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:08.872469 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:08.885506 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:09.137724 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:09.371090 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:09.385293 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:09.640349 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:09.872899 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1222 00:11:09.885123 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:10.143126 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:10.371848 1882225 kapi.go:107] duration metric: took 51.004007988s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1222 00:11:10.374748 1882225 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-568394 cluster.
	I1222 00:11:10.377156 1882225 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1222 00:11:10.380081 1882225 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
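	A minimal sketch of the opt-out described above, assuming a hypothetical pod name and image, and assuming the webhook matches the label value "true":
	
	  # hypothetical pod name/image; the label key comes from the gcp-auth message above,
	  # and the value "true" is an assumption about what the mutating webhook checks
	  kubectl run skip-gcp-demo --image=busybox:1.28 --labels=gcp-auth-skip-secret=true -- sleep 3600
	  # if the label was honored, the gcp-auth credential volume should be absent here
	  kubectl get pod skip-gcp-demo -o jsonpath='{.spec.volumes[*].name}'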
	I1222 00:11:10.385276 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:10.637581 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:10.886014 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:11.137535 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:11.385838 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:11.637591 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:11.884943 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:12.137849 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:12.385277 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:12.636691 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:12.886986 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:13.137754 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:13.385280 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:13.639905 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:13.888603 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:14.143319 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:14.385218 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:14.636987 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:14.885449 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:15.137156 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:15.385131 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:15.636674 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:15.886038 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:16.137037 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:16.385585 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:16.637637 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:16.885283 1882225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1222 00:11:17.136924 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:17.384919 1882225 kapi.go:107] duration metric: took 1m1.503539015s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1222 00:11:17.636933 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:18.138079 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:18.636461 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:19.137527 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:19.637908 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:20.137368 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:20.636881 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:21.137248 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:21.638444 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:22.137785 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:22.636279 1882225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1222 00:11:23.137237 1882225 kapi.go:107] duration metric: took 1m7.004153269s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1222 00:11:23.140336 1882225 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, inspektor-gadget, registry-creds, ingress-dns, cloud-spanner, amd-gpu-device-plugin, default-storageclass, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1222 00:11:23.143154 1882225 addons.go:530] duration metric: took 1m14.548084557s for enable addons: enabled=[nvidia-device-plugin storage-provisioner inspektor-gadget registry-creds ingress-dns cloud-spanner amd-gpu-device-plugin default-storageclass metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1222 00:11:23.143219 1882225 start.go:247] waiting for cluster config update ...
	I1222 00:11:23.143243 1882225 start.go:256] writing updated cluster config ...
	I1222 00:11:23.143560 1882225 ssh_runner.go:195] Run: rm -f paused
	I1222 00:11:23.149433 1882225 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 00:11:23.153956 1882225 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6hqzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.161073 1882225 pod_ready.go:94] pod "coredns-66bc5c9577-6hqzq" is "Ready"
	I1222 00:11:23.161156 1882225 pod_ready.go:86] duration metric: took 7.168539ms for pod "coredns-66bc5c9577-6hqzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.163671 1882225 pod_ready.go:83] waiting for pod "etcd-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.169038 1882225 pod_ready.go:94] pod "etcd-addons-568394" is "Ready"
	I1222 00:11:23.169102 1882225 pod_ready.go:86] duration metric: took 5.409009ms for pod "etcd-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.172559 1882225 pod_ready.go:83] waiting for pod "kube-apiserver-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.179549 1882225 pod_ready.go:94] pod "kube-apiserver-addons-568394" is "Ready"
	I1222 00:11:23.179577 1882225 pod_ready.go:86] duration metric: took 6.991511ms for pod "kube-apiserver-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.183896 1882225 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.553836 1882225 pod_ready.go:94] pod "kube-controller-manager-addons-568394" is "Ready"
	I1222 00:11:23.553872 1882225 pod_ready.go:86] duration metric: took 369.945153ms for pod "kube-controller-manager-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:23.753835 1882225 pod_ready.go:83] waiting for pod "kube-proxy-7rf5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.153436 1882225 pod_ready.go:94] pod "kube-proxy-7rf5z" is "Ready"
	I1222 00:11:24.153463 1882225 pod_ready.go:86] duration metric: took 399.598196ms for pod "kube-proxy-7rf5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.353872 1882225 pod_ready.go:83] waiting for pod "kube-scheduler-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.752894 1882225 pod_ready.go:94] pod "kube-scheduler-addons-568394" is "Ready"
	I1222 00:11:24.752924 1882225 pod_ready.go:86] duration metric: took 399.021317ms for pod "kube-scheduler-addons-568394" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 00:11:24.752939 1882225 pod_ready.go:40] duration metric: took 1.60347193s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 00:11:24.812957 1882225 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 00:11:24.816611 1882225 out.go:179] * Done! kubectl is now configured to use "addons-568394" cluster and "default" namespace by default
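	What "configured" means here can be verified with standard kubectl commands (assuming, as is minikube's convention, that the context name follows the profile shown above):
	
	  kubectl config current-context   # should print: addons-568394
	  kubectl get pods -A              # served by the addons-568394 API server with no extra flags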
	
	
	==> CRI-O <==
	Dec 22 00:11:22 addons-568394 crio[826]: time="2025-12-22T00:11:22.683280167Z" level=info msg="Created container ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0: kube-system/csi-hostpathplugin-26hr6/csi-snapshotter" id=79c5cdfc-753c-4277-b7e2-ae2fe9177e16 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 00:11:22 addons-568394 crio[826]: time="2025-12-22T00:11:22.685838304Z" level=info msg="Starting container: ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0" id=1d0533e6-cea2-485c-b322-3ffce3a30404 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 00:11:22 addons-568394 crio[826]: time="2025-12-22T00:11:22.68962377Z" level=info msg="Started container" PID=5009 containerID=ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0 description=kube-system/csi-hostpathplugin-26hr6/csi-snapshotter id=1d0533e6-cea2-485c-b322-3ffce3a30404 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17eb3c21dbb4540de70b62adaa1a4b6946508f98d7ee1d73167cbf4c7e40e2fc
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.950762901Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ec9cde4d-6aa8-4f29-97fe-aa3243717b7e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.950850119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.958519222Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:25298aed1928d111070a44553022c3dba3258e33f9599d156f4fef52fc4c14d9 UID:112ce16b-8ebc-4364-9aa5-006a80b00eb5 NetNS:/var/run/netns/490853dc-b003-41f0-97f0-fc5bebbad0ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001dec6e0}] Aliases:map[]}"
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.958556752Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.97136736Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:25298aed1928d111070a44553022c3dba3258e33f9599d156f4fef52fc4c14d9 UID:112ce16b-8ebc-4364-9aa5-006a80b00eb5 NetNS:/var/run/netns/490853dc-b003-41f0-97f0-fc5bebbad0ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001dec6e0}] Aliases:map[]}"
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.971597211Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.974656167Z" level=info msg="Ran pod sandbox 25298aed1928d111070a44553022c3dba3258e33f9599d156f4fef52fc4c14d9 with infra container: default/busybox/POD" id=ec9cde4d-6aa8-4f29-97fe-aa3243717b7e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.975853917Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=af4134d3-7fc9-4ab6-8f31-a1a038cbb469 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.976029781Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=af4134d3-7fc9-4ab6-8f31-a1a038cbb469 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.976080126Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=af4134d3-7fc9-4ab6-8f31-a1a038cbb469 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.977121819Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9ea4445e-cc0c-4443-91eb-ca8947202478 name=/runtime.v1.ImageService/PullImage
	Dec 22 00:11:25 addons-568394 crio[826]: time="2025-12-22T00:11:25.979288041Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.80173497Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9ea4445e-cc0c-4443-91eb-ca8947202478 name=/runtime.v1.ImageService/PullImage
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.80238355Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04bd3e47-e5f7-44c9-8a51-b05f966059b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.804190743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3583329d-cfb4-4fe8-a2f0-b5942af831dc name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.81164894Z" level=info msg="Creating container: default/busybox/busybox" id=f7950075-3f2f-4e04-958e-e4d1ef04e6c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.811762077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.818416736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.819055019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.837035348Z" level=info msg="Created container 84507c5d4f38e855d6992a10869890e854393f441ba343c04803085581809774: default/busybox/busybox" id=f7950075-3f2f-4e04-958e-e4d1ef04e6c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.838079461Z" level=info msg="Starting container: 84507c5d4f38e855d6992a10869890e854393f441ba343c04803085581809774" id=66c9222c-f520-4cb1-9ff9-f8ef0c9da6e1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 00:11:27 addons-568394 crio[826]: time="2025-12-22T00:11:27.83981345Z" level=info msg="Started container" PID=5096 containerID=84507c5d4f38e855d6992a10869890e854393f441ba343c04803085581809774 description=default/busybox/busybox id=66c9222c-f520-4cb1-9ff9-f8ef0c9da6e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25298aed1928d111070a44553022c3dba3258e33f9599d156f4fef52fc4c14d9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	84507c5d4f38e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   25298aed1928d       busybox                                     default
	ba6d2a10935cf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	c21e1cad727bb       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          15 seconds ago       Running             csi-provisioner                          0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	4a3875b2cbd08       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	253f713cf37c0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           18 seconds ago       Running             hostpath                                 0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	99e3eb4d57897       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                19 seconds ago       Running             node-driver-registrar                    0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	09131f4bbc07b       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             20 seconds ago       Running             controller                               0                   c0ab02560089d       ingress-nginx-controller-85d4c799dd-8ck2d   ingress-nginx
	27eddd74d2eec       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 27 seconds ago       Running             gcp-auth                                 0                   971c90ee9782d       gcp-auth-78565c9fb4-72kzj                   gcp-auth
	0c918dcef910a       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             27 seconds ago       Exited              patch                                    3                   716b14dc23577       gcp-auth-certs-patch-qz848                  gcp-auth
	f19b94452f671       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            30 seconds ago       Running             gadget                                   0                   0d3e34333840d       gadget-5ghgc                                gadget
	aebcc07cf8334       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             32 seconds ago       Exited              patch                                    2                   5e6dfe9b50817       ingress-nginx-admission-patch-vgv9k         ingress-nginx
	86a55a63efb3c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              33 seconds ago       Running             registry-proxy                           0                   372d3e8a0451e       registry-proxy-nsh85                        kube-system
	71380ae3b449a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   37 seconds ago       Running             csi-external-health-monitor-controller   0                   17eb3c21dbb45       csi-hostpathplugin-26hr6                    kube-system
	fc3fdb6c93d05       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     38 seconds ago       Running             nvidia-device-plugin-ctr                 0                   ca826af535a34       nvidia-device-plugin-daemonset-j6xlv        kube-system
	c803faf1630b1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              42 seconds ago       Running             csi-resizer                              0                   5da8f766f6bec       csi-hostpath-resizer-0                      kube-system
	ad2b295ef64de       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   44 seconds ago       Exited              create                                   0                   75a67ca4886be       ingress-nginx-admission-create-d4zxr        ingress-nginx
	0e06647d0f9bc       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             44 seconds ago       Running             local-path-provisioner                   0                   cf31b2b38c46b       local-path-provisioner-648f6765c9-csq79     local-path-storage
	e4b23bf26fa66       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   45 seconds ago       Exited              create                                   0                   558170e4c55ca       gcp-auth-certs-create-zf9cf                 gcp-auth
	1036aaa1604dc       docker.io/marcnuri/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                              46 seconds ago       Running             yakd                                     0                   b692485a18f65       yakd-dashboard-6654c87f9b-t69qw             yakd-dashboard
	234210792d937       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             49 seconds ago       Running             csi-attacher                             0                   3444b4de34f78       csi-hostpath-attacher-0                     kube-system
	c0562ac57cc54       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      51 seconds ago       Running             volume-snapshot-controller               0                   cc05c52d1f0a6       snapshot-controller-7d9fbc56b8-bkzz5        kube-system
	efde6ed9bfd43       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               51 seconds ago       Running             cloud-spanner-emulator                   0                   5a70916445b54       cloud-spanner-emulator-85df47b6f4-f487h     default
	acf18d680a9bd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      56 seconds ago       Running             volume-snapshot-controller               0                   468ea053da588       snapshot-controller-7d9fbc56b8-dh27b        kube-system
	e8ebac07e8c89       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           57 seconds ago       Running             registry                                 0                   00973a74f6add       registry-6b586f9694-g9bl2                   kube-system
	8f1fd9842de20       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               59 seconds ago       Running             minikube-ingress-dns                     0                   253f71bef8e56       kube-ingress-dns-minikube                   kube-system
	85e4e89412a1d       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   63385695633dc       metrics-server-85b7d694d7-gdkf4             kube-system
	7af29b5bafd6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   5596da7043b7d       storage-provisioner                         kube-system
	41f5a4e6951c9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   9d8d64c37a8e6       coredns-66bc5c9577-6hqzq                    kube-system
	2d9448f792111       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           About a minute ago   Running             kindnet-cni                              0                   1e424b7a993ad       kindnet-qsbwv                               kube-system
	9f23e3541811b       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                                                             About a minute ago   Running             kube-proxy                               0                   e4c991bbca414       kube-proxy-7rf5z                            kube-system
	dd0d9f62338da       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             About a minute ago   Running             etcd                                     0                   7101563766f20       etcd-addons-568394                          kube-system
	b917df9a6d567       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                                                             About a minute ago   Running             kube-scheduler                           0                   af4cea1302c88       kube-scheduler-addons-568394                kube-system
	12de3313f756f       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                                                             About a minute ago   Running             kube-controller-manager                  0                   ee6aa3d61e3e8       kube-controller-manager-addons-568394       kube-system
	6f8ad0683d747       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                                                             About a minute ago   Running             kube-apiserver                           0                   1a37bbe02959f       kube-apiserver-addons-568394                kube-system
	
	
	==> coredns [41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3] <==
	[INFO] 10.244.0.15:60423 - 62585 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000110619s
	[INFO] 10.244.0.15:60423 - 21356 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002175634s
	[INFO] 10.244.0.15:60423 - 7819 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002215345s
	[INFO] 10.244.0.15:60423 - 12153 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000114048s
	[INFO] 10.244.0.15:60423 - 63023 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000135856s
	[INFO] 10.244.0.15:44607 - 58889 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158937s
	[INFO] 10.244.0.15:44607 - 59359 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160086s
	[INFO] 10.244.0.15:44963 - 51559 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111398s
	[INFO] 10.244.0.15:44963 - 51370 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000173279s
	[INFO] 10.244.0.15:57590 - 59680 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094742s
	[INFO] 10.244.0.15:57590 - 59467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161291s
	[INFO] 10.244.0.15:34870 - 14412 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001560989s
	[INFO] 10.244.0.15:34870 - 14215 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001602038s
	[INFO] 10.244.0.15:52948 - 1241 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012944s
	[INFO] 10.244.0.15:52948 - 862 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096235s
	[INFO] 10.244.0.20:35571 - 43331 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181213s
	[INFO] 10.244.0.20:47221 - 1554 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121949s
	[INFO] 10.244.0.20:46207 - 28695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135979s
	[INFO] 10.244.0.20:54819 - 7942 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084239s
	[INFO] 10.244.0.20:36404 - 64542 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139262s
	[INFO] 10.244.0.20:56954 - 53259 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090796s
	[INFO] 10.244.0.20:40496 - 22447 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001875926s
	[INFO] 10.244.0.20:45674 - 64836 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001699479s
	[INFO] 10.244.0.20:46446 - 50692 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003444799s
	[INFO] 10.244.0.20:34162 - 26165 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001584931s
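	The NXDOMAIN/NOERROR pairs above are normal ndots:5 resolution, not a DNS fault: `registry.kube-system.svc.cluster.local` has fewer than five dots, so the pod resolver first appends every `search` suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the bare name finally answers NOERROR. The search path can be confirmed from inside any pod, for instance the busybox pod deployed during this run:
	
	    kubectl --context addons-568394 exec busybox -- cat /etc/resolv.conf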
	
	
	==> describe nodes <==
	Name:               addons-568394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-568394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=addons-568394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T00_10_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-568394
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-568394"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 00:09:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-568394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 00:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 00:11:14 +0000   Mon, 22 Dec 2025 00:09:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 00:11:14 +0000   Mon, 22 Dec 2025 00:09:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 00:11:14 +0000   Mon, 22 Dec 2025 00:09:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 00:11:14 +0000   Mon, 22 Dec 2025 00:10:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-568394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                b8b7ef60-15bb-4f25-89c0-52633a96bb5d
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-85df47b6f4-f487h      0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  gadget                      gadget-5ghgc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  gcp-auth                    gcp-auth-78565c9fb4-72kzj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-8ck2d    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         82s
	  kube-system                 coredns-66bc5c9577-6hqzq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 csi-hostpathplugin-26hr6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 etcd-addons-568394                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         97s
	  kube-system                 kindnet-qsbwv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-addons-568394                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-addons-568394        200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-7rf5z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-addons-568394                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-85b7d694d7-gdkf4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         83s
	  kube-system                 nvidia-device-plugin-daemonset-j6xlv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-6b586f9694-g9bl2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 registry-creds-764b6fb674-nbrkf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 registry-proxy-nsh85                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 snapshot-controller-7d9fbc56b8-bkzz5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-dh27b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  local-path-storage          local-path-provisioner-648f6765c9-csq79      0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-t69qw              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 87s                  kube-proxy       
	  Normal   Starting                 103s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 103s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  103s (x9 over 103s)  kubelet          Node addons-568394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node addons-568394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node addons-568394 status is now: NodeHasSufficientPID
	  Normal   Starting                 95s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 95s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s                  kubelet          Node addons-568394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s                  kubelet          Node addons-568394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s                  kubelet          Node addons-568394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           90s                  node-controller  Node addons-568394 event: Registered Node addons-568394 in Controller
	  Normal   NodeReady                74s                  kubelet          Node addons-568394 status is now: NodeReady
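	One detail worth flagging in this snapshot: CPU requests already total 1050m of the node's 2 CPUs (52%), so any addon whose pods carry non-trivial requests is competing for the remaining half core. The same view can be regenerated at any time with:
	
	    kubectl --context addons-568394 describe node addons-568394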
	
	
	==> dmesg <==
	[Dec21 22:02] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
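	The repeated `overlayfs: idmapped layers are currently not supported` lines come from the 5.15 AWS kernel and appear to be emitted whenever a runtime attempts an idmapped overlay mount; they are kernel noise rather than a test failure. They can be filtered on the node with:
	
	    minikube -p addons-568394 ssh -- sudo dmesg | grep overlayfs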
	
	
	==> etcd [dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f] <==
	{"level":"warn","ts":"2025-12-22T00:09:58.145273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.172119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.197590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.227372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.262394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.282341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.306487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.340718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.365593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.388344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.416364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.444493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.487288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.505191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.547633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.584222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.612396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.629971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:09:58.781512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:16.406553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:16.420954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.503805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.558089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.582733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T00:10:37.604835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40564","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [27eddd74d2eecf79b0ba3e1b77abc25315582f513f8986e552f5f6e7ca7e0f55] <==
	2025/12/22 00:11:09 GCP Auth Webhook started!
	2025/12/22 00:11:25 Ready to marshal response ...
	2025/12/22 00:11:25 Ready to write response ...
	2025/12/22 00:11:25 Ready to marshal response ...
	2025/12/22 00:11:25 Ready to write response ...
	2025/12/22 00:11:25 Ready to marshal response ...
	2025/12/22 00:11:25 Ready to write response ...
	
	
	==> kernel <==
	 00:11:37 up 1 day,  6:54,  0 user,  load average: 4.44, 2.82, 1.95
	Linux addons-568394 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e] <==
	I1222 00:10:12.623611       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T00:10:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 00:10:12.826728       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 00:10:12.826745       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 00:10:12.826754       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 00:10:12.827055       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 00:10:13.029485       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 00:10:13.029896       1 metrics.go:72] Registering metrics
	I1222 00:10:13.030040       1 controller.go:711] "Syncing nftables rules"
	I1222 00:10:22.827927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:10:22.828011       1 main.go:301] handling current node
	I1222 00:10:32.825449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:10:32.825526       1 main.go:301] handling current node
	I1222 00:10:42.825382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:10:42.825464       1 main.go:301] handling current node
	I1222 00:10:52.826140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:10:52.826191       1 main.go:301] handling current node
	I1222 00:11:02.827892       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:11:02.827954       1 main.go:301] handling current node
	I1222 00:11:12.825507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:11:12.825537       1 main.go:301] handling current node
	I1222 00:11:22.825407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:11:22.825466       1 main.go:301] handling current node
	I1222 00:11:32.825095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1222 00:11:32.825134       1 main.go:301] handling current node
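	kindnet looks healthy here: after the one-off NRI message (there is no /var/run/nri/nri.sock on this node, so that plugin is simply skipped) it settles into its ten-second single-node reconcile loop. The same stream can be followed live via the DaemonSet's pod label (assumed to be app=kindnet):
	
	    kubectl --context addons-568394 -n kube-system logs -l app=kindnet --tail=20 -f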
	
	
	==> kube-apiserver [6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7] <==
	I1222 00:10:16.001549       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.103.79.189"}
	I1222 00:10:16.014179       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1222 00:10:16.089389       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.216.231"}
	W1222 00:10:16.406307       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:16.420714       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1222 00:10:19.246916       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.210.109"}
	W1222 00:10:23.487359       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.210.109:443: connect: connection refused
	E1222 00:10:23.487461       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.210.109:443: connect: connection refused" logger="UnhandledError"
	W1222 00:10:23.488858       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.210.109:443: connect: connection refused
	E1222 00:10:23.491639       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.210.109:443: connect: connection refused" logger="UnhandledError"
	W1222 00:10:23.582684       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.210.109:443: connect: connection refused
	E1222 00:10:23.582723       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.210.109:443: connect: connection refused" logger="UnhandledError"
	E1222 00:10:30.783253       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.74.117:443: connect: connection refused" logger="UnhandledError"
	W1222 00:10:30.783429       1 handler_proxy.go:99] no RequestInfo found in the context
	E1222 00:10:30.783481       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1222 00:10:30.784100       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.74.117:443: connect: connection refused" logger="UnhandledError"
	E1222 00:10:30.789967       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.74.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.74.117:443: connect: connection refused" logger="UnhandledError"
	I1222 00:10:30.918392       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1222 00:10:37.503089       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:37.555829       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:37.582283       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1222 00:10:37.602456       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
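	Two startup races are visible here, both plausibly related to the addon failures in this run: the gcp-auth mutating webhook failing open while its service endpoint (10.107.210.109:443) was still refusing connections, and `v1beta1.metrics.k8s.io` returning 503 until metrics-server came up. Whether the APIService ever became Available can be checked with:
	
	    kubectl --context addons-568394 get apiservice v1beta1.metrics.k8s.io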
	
	
	==> kube-controller-manager [12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1] <==
	I1222 00:10:07.528338       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 00:10:07.528359       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 00:10:07.528368       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 00:10:07.528702       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1222 00:10:07.528768       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1222 00:10:07.528841       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-568394"
	I1222 00:10:07.528874       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1222 00:10:07.529513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 00:10:07.529822       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1222 00:10:07.529954       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 00:10:07.529973       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1222 00:10:07.530100       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 00:10:07.530365       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1222 00:10:07.530748       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 00:10:07.530809       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 00:10:07.530809       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 00:10:07.531339       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1222 00:10:07.534926       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 00:10:07.536438       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1222 00:10:27.532607       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1222 00:10:37.493317       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1222 00:10:37.493397       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1222 00:10:37.533740       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1222 00:10:37.594452       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 00:10:37.634808       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634] <==
	I1222 00:10:09.574751       1 server_linux.go:53] "Using iptables proxy"
	I1222 00:10:09.731105       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 00:10:09.831386       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 00:10:09.831416       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1222 00:10:09.831485       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 00:10:09.867023       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 00:10:09.867076       1 server_linux.go:132] "Using iptables Proxier"
	I1222 00:10:09.932740       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 00:10:09.933041       1 server.go:527] "Version info" version="v1.34.3"
	I1222 00:10:09.933064       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 00:10:09.956460       1 config.go:200] "Starting service config controller"
	I1222 00:10:09.962515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 00:10:09.962590       1 config.go:106] "Starting endpoint slice config controller"
	I1222 00:10:09.962597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 00:10:09.962609       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 00:10:09.962613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 00:10:09.963261       1 config.go:309] "Starting node config controller"
	I1222 00:10:09.963269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 00:10:09.963275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 00:10:10.063143       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 00:10:10.063180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1222 00:10:10.063415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
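	The only complaint in this block is kube-proxy's own advisory about `nodePortAddresses` being unset. Its suggestion can be applied through the kube-proxy ConfigMap; a minimal sketch, assuming minikube's kubeadm layout where the KubeProxyConfiguration lives under the `config.conf` key:
	
	    kubectl --context addons-568394 -n kube-system edit configmap kube-proxy
	    # in config.conf, set:
	    #   nodePortAddresses: ["primary"]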
	
	
	==> kube-scheduler [b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f] <==
	I1222 00:10:00.804239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 00:10:00.806728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 00:10:00.806822       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 00:10:00.807744       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 00:10:00.807825       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1222 00:10:00.810886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1222 00:10:00.817570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1222 00:10:00.818075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1222 00:10:00.818177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1222 00:10:00.818254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1222 00:10:00.818350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 00:10:00.818417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 00:10:00.818490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1222 00:10:00.818565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 00:10:00.818637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1222 00:10:00.818705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1222 00:10:00.818772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 00:10:00.823460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1222 00:10:00.823587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1222 00:10:00.823693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1222 00:10:00.823874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1222 00:10:00.824013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 00:10:00.824115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1222 00:10:00.824208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1222 00:10:02.107353       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
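	The burst of `Failed to watch ... is forbidden` errors is confined to the first second after startup, before the scheduler's RBAC bindings had propagated; the closing `Caches are synced` line shows it recovered on its own. Permissions can be spot-checked after the fact, for example:
	
	    kubectl --context addons-568394 auth can-i list pods --as=system:kube-scheduler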
	
	
	==> kubelet <==
	Dec 22 00:11:04 addons-568394 kubelet[1282]: I1222 00:11:04.925167    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nsh85" secret="" err="secret \"gcp-auth\" not found"
	Dec 22 00:11:05 addons-568394 kubelet[1282]: I1222 00:11:05.931012    1282 scope.go:117] "RemoveContainer" containerID="8ddd0d57c0ef4655ee861d96dd0a8b7d1c3f337b48c51998e1c52fcbe361976e"
	Dec 22 00:11:05 addons-568394 kubelet[1282]: I1222 00:11:05.950084    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-nsh85" podStartSLOduration=4.685843775 podStartE2EDuration="42.950057604s" podCreationTimestamp="2025-12-22 00:10:23 +0000 UTC" firstStartedPulling="2025-12-22 00:10:25.030895683 +0000 UTC m=+22.776791974" lastFinishedPulling="2025-12-22 00:11:03.295109512 +0000 UTC m=+61.041005803" observedRunningTime="2025-12-22 00:11:03.951589525 +0000 UTC m=+61.697485816" watchObservedRunningTime="2025-12-22 00:11:05.950057604 +0000 UTC m=+63.695953895"
	Dec 22 00:11:07 addons-568394 kubelet[1282]: I1222 00:11:07.019320    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-5ghgc" podStartSLOduration=17.256153872 podStartE2EDuration="53.019300532s" podCreationTimestamp="2025-12-22 00:10:14 +0000 UTC" firstStartedPulling="2025-12-22 00:10:30.850163623 +0000 UTC m=+28.596059906" lastFinishedPulling="2025-12-22 00:11:06.613310284 +0000 UTC m=+64.359206566" observedRunningTime="2025-12-22 00:11:06.98627545 +0000 UTC m=+64.732171766" watchObservedRunningTime="2025-12-22 00:11:07.019300532 +0000 UTC m=+64.765196814"
	Dec 22 00:11:07 addons-568394 kubelet[1282]: I1222 00:11:07.138431    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxppb\" (UniqueName: \"kubernetes.io/projected/190eca60-12a0-41c8-9341-4066353f118a-kube-api-access-mxppb\") pod \"190eca60-12a0-41c8-9341-4066353f118a\" (UID: \"190eca60-12a0-41c8-9341-4066353f118a\") "
	Dec 22 00:11:07 addons-568394 kubelet[1282]: I1222 00:11:07.143460    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190eca60-12a0-41c8-9341-4066353f118a-kube-api-access-mxppb" (OuterVolumeSpecName: "kube-api-access-mxppb") pod "190eca60-12a0-41c8-9341-4066353f118a" (UID: "190eca60-12a0-41c8-9341-4066353f118a"). InnerVolumeSpecName "kube-api-access-mxppb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 22 00:11:07 addons-568394 kubelet[1282]: I1222 00:11:07.238958    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxppb\" (UniqueName: \"kubernetes.io/projected/190eca60-12a0-41c8-9341-4066353f118a-kube-api-access-mxppb\") on node \"addons-568394\" DevicePath \"\""
	Dec 22 00:11:07 addons-568394 kubelet[1282]: I1222 00:11:07.955505    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6dfe9b508173374eefcc470cfa40583bc0e212d2f550d799b52bccf8fa24c0"
	Dec 22 00:11:09 addons-568394 kubelet[1282]: I1222 00:11:09.449876    1282 scope.go:117] "RemoveContainer" containerID="dc101fdf0e70a3a3c38567f1a4c990ec00c03753d53a18d7fc06cff07f066826"
	Dec 22 00:11:09 addons-568394 kubelet[1282]: I1222 00:11:09.974174    1282 scope.go:117] "RemoveContainer" containerID="dc101fdf0e70a3a3c38567f1a4c990ec00c03753d53a18d7fc06cff07f066826"
	Dec 22 00:11:09 addons-568394 kubelet[1282]: I1222 00:11:09.992405    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-72kzj" podStartSLOduration=37.202385929 podStartE2EDuration="50.99238622s" podCreationTimestamp="2025-12-22 00:10:19 +0000 UTC" firstStartedPulling="2025-12-22 00:10:55.80722733 +0000 UTC m=+53.553123613" lastFinishedPulling="2025-12-22 00:11:09.597227621 +0000 UTC m=+67.343123904" observedRunningTime="2025-12-22 00:11:09.99156208 +0000 UTC m=+67.737458379" watchObservedRunningTime="2025-12-22 00:11:09.99238622 +0000 UTC m=+67.738282503"
	Dec 22 00:11:11 addons-568394 kubelet[1282]: I1222 00:11:11.588880    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-897qn\" (UniqueName: \"kubernetes.io/projected/a0173f68-4c01-4e04-952f-a8254d3b54f1-kube-api-access-897qn\") pod \"a0173f68-4c01-4e04-952f-a8254d3b54f1\" (UID: \"a0173f68-4c01-4e04-952f-a8254d3b54f1\") "
	Dec 22 00:11:11 addons-568394 kubelet[1282]: I1222 00:11:11.595585    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0173f68-4c01-4e04-952f-a8254d3b54f1-kube-api-access-897qn" (OuterVolumeSpecName: "kube-api-access-897qn") pod "a0173f68-4c01-4e04-952f-a8254d3b54f1" (UID: "a0173f68-4c01-4e04-952f-a8254d3b54f1"). InnerVolumeSpecName "kube-api-access-897qn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 22 00:11:11 addons-568394 kubelet[1282]: I1222 00:11:11.690340    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-897qn\" (UniqueName: \"kubernetes.io/projected/a0173f68-4c01-4e04-952f-a8254d3b54f1-kube-api-access-897qn\") on node \"addons-568394\" DevicePath \"\""
	Dec 22 00:11:12 addons-568394 kubelet[1282]: I1222 00:11:12.039963    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716b14dc2357769fde18b1a75cf8609a34e13bf3a5fec10e63546fb8b01d8fc6"
	Dec 22 00:11:17 addons-568394 kubelet[1282]: I1222 00:11:17.078957    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-8ck2d" podStartSLOduration=41.458302897 podStartE2EDuration="1m2.0789425s" podCreationTimestamp="2025-12-22 00:10:15 +0000 UTC" firstStartedPulling="2025-12-22 00:10:55.87644501 +0000 UTC m=+53.622341301" lastFinishedPulling="2025-12-22 00:11:16.497084621 +0000 UTC m=+74.242980904" observedRunningTime="2025-12-22 00:11:17.077927793 +0000 UTC m=+74.823824076" watchObservedRunningTime="2025-12-22 00:11:17.0789425 +0000 UTC m=+74.824838791"
	Dec 22 00:11:19 addons-568394 kubelet[1282]: I1222 00:11:19.606551    1282 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 22 00:11:19 addons-568394 kubelet[1282]: I1222 00:11:19.606598    1282 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 22 00:11:23 addons-568394 kubelet[1282]: I1222 00:11:23.120404    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-26hr6" podStartSLOduration=2.506143148 podStartE2EDuration="1m0.120385981s" podCreationTimestamp="2025-12-22 00:10:23 +0000 UTC" firstStartedPulling="2025-12-22 00:10:25.030564673 +0000 UTC m=+22.776460956" lastFinishedPulling="2025-12-22 00:11:22.644807515 +0000 UTC m=+80.390703789" observedRunningTime="2025-12-22 00:11:23.117806716 +0000 UTC m=+80.863703015" watchObservedRunningTime="2025-12-22 00:11:23.120385981 +0000 UTC m=+80.866282272"
	Dec 22 00:11:24 addons-568394 kubelet[1282]: I1222 00:11:24.453004    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c19a6e-e284-4a2c-844a-3e25d082862d" path="/var/lib/kubelet/pods/c8c19a6e-e284-4a2c-844a-3e25d082862d/volumes"
	Dec 22 00:11:25 addons-568394 kubelet[1282]: I1222 00:11:25.725853    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/112ce16b-8ebc-4364-9aa5-006a80b00eb5-gcp-creds\") pod \"busybox\" (UID: \"112ce16b-8ebc-4364-9aa5-006a80b00eb5\") " pod="default/busybox"
	Dec 22 00:11:25 addons-568394 kubelet[1282]: I1222 00:11:25.725942    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5qh\" (UniqueName: \"kubernetes.io/projected/112ce16b-8ebc-4364-9aa5-006a80b00eb5-kube-api-access-vg5qh\") pod \"busybox\" (UID: \"112ce16b-8ebc-4364-9aa5-006a80b00eb5\") " pod="default/busybox"
	Dec 22 00:11:27 addons-568394 kubelet[1282]: E1222 00:11:27.442982    1282 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 22 00:11:27 addons-568394 kubelet[1282]: E1222 00:11:27.443086    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff-gcr-creds podName:e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff nodeName:}" failed. No retries permitted until 2025-12-22 00:12:31.443068057 +0000 UTC m=+149.188964340 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff-gcr-creds") pod "registry-creds-764b6fb674-nbrkf" (UID: "e6e4b1dc-a6cc-4fe4-a98e-7d7b9b9e08ff") : secret "registry-creds-gcr" not found
	Dec 22 00:11:28 addons-568394 kubelet[1282]: I1222 00:11:28.131403    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.304584005 podStartE2EDuration="3.131382594s" podCreationTimestamp="2025-12-22 00:11:25 +0000 UTC" firstStartedPulling="2025-12-22 00:11:25.976383986 +0000 UTC m=+83.722280269" lastFinishedPulling="2025-12-22 00:11:27.803182567 +0000 UTC m=+85.549078858" observedRunningTime="2025-12-22 00:11:28.129866297 +0000 UTC m=+85.875762588" watchObservedRunningTime="2025-12-22 00:11:28.131382594 +0000 UTC m=+85.877278885"
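	The MountVolume.SetUp failure at 00:11:27 explains the `registry-creds-764b6fb674-nbrkf` entry in the non-running-pods list further down: the addon expects a `registry-creds-gcr` secret that was never created, and the kubelet will not retry the mount for another 1m4s. Whether the secret exists is a one-liner:
	
	    kubectl --context addons-568394 -n kube-system get secret registry-creds-gcr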
	
	
	==> storage-provisioner [7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5] <==
	W1222 00:11:12.803744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:14.807325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:14.816652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:16.819442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:16.826864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:18.830507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:18.835162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:20.841361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:20.851194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:22.854761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:22.859150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:24.866545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:24.871908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:26.875402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:26.883893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:28.887294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:28.894403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:30.898333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:30.902992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:32.906155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:32.910205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:34.916247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:34.928695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:36.932455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 00:11:36.937963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
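	The steady two-second stream of deprecation warnings appears to stem from the storage-provisioner still using the v1 Endpoints API (its leader-election lock is an Endpoints object); the replacement resource is discovery.k8s.io/v1 EndpointSlice. Both views can be compared side by side:
	
	    kubectl --context addons-568394 -n kube-system get endpoints,endpointslices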
	

-- /stdout --
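The dump above is the post-mortem log bundle the test helpers capture after a failure; an equivalent bundle can usually be regenerated by hand against the same profile, e.g.:

	out/minikube-linux-arm64 -p addons-568394 logs --file=/tmp/addons-568394.log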
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-568394 -n addons-568394
helpers_test.go:270: (dbg) Run:  kubectl --context addons-568394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-qz848 ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k registry-creds-764b6fb674-nbrkf
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-568394 describe pod gcp-auth-certs-patch-qz848 ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k registry-creds-764b6fb674-nbrkf
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-568394 describe pod gcp-auth-certs-patch-qz848 ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k registry-creds-764b6fb674-nbrkf: exit status 1 (87.658829ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-qz848" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-d4zxr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vgv9k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nbrkf" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-568394 describe pod gcp-auth-certs-patch-qz848 ingress-nginx-admission-create-d4zxr ingress-nginx-admission-patch-vgv9k registry-creds-764b6fb674-nbrkf: exit status 1
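The four NotFound errors above are an artifact of namespacing rather than missing state: `kubectl describe pod` without `-n` only searches the default namespace, while these pods live in gcp-auth, ingress-nginx, and kube-system. A namespace-qualified describe works, e.g.:

	kubectl --context addons-568394 -n kube-system describe pod registry-creds-764b6fb674-nbrkf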
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable headlamp --alsologtostderr -v=1: exit status 11 (264.43544ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:11:38.487314 1888905 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:11:38.488622 1888905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:38.488671 1888905 out.go:374] Setting ErrFile to fd 2...
	I1222 00:11:38.488696 1888905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:11:38.488978 1888905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:11:38.489363 1888905 mustload.go:66] Loading cluster: addons-568394
	I1222 00:11:38.489778 1888905 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:38.489829 1888905 addons.go:622] checking whether the cluster is paused
	I1222 00:11:38.489963 1888905 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:11:38.489997 1888905 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:11:38.490560 1888905 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:11:38.518285 1888905 ssh_runner.go:195] Run: systemctl --version
	I1222 00:11:38.518343 1888905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:11:38.536124 1888905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:11:38.632259 1888905 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:11:38.632358 1888905 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:11:38.661456 1888905 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:11:38.661480 1888905 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:11:38.661486 1888905 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:11:38.661490 1888905 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:11:38.661493 1888905 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:11:38.661497 1888905 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:11:38.661500 1888905 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:11:38.661504 1888905 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:11:38.661507 1888905 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:11:38.661514 1888905 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:11:38.661517 1888905 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:11:38.661521 1888905 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:11:38.661524 1888905 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:11:38.661528 1888905 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:11:38.661532 1888905 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:11:38.661541 1888905 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:11:38.661545 1888905 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:11:38.661550 1888905 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:11:38.661554 1888905 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:11:38.661557 1888905 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:11:38.661562 1888905 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:11:38.661572 1888905 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:11:38.661575 1888905 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:11:38.661583 1888905 cri.go:96] found id: ""
	I1222 00:11:38.661635 1888905 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:11:38.677290 1888905 out.go:203] 
	W1222 00:11:38.680332 1888905 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:11:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:11:38.680371 1888905 out.go:285] * 
	* 
	W1222 00:11:38.698321 1888905 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:11:38.701497 1888905 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.13s)
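Headlamp itself became healthy; the test fails only in the shared disable step. Per the transcript, `addons disable` first verifies the cluster is not paused: it lists kube-system containers with crictl (which succeeds, the cri.go:96 lines above) and then cross-checks with `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node, so minikube aborts with MK_ADDON_DISABLE_PAUSED. The same sequence sinks CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd below. A minimal Go sketch of the two commands exactly as they appear in the log (hypothetical repro code to run inside the node via `minikube ssh`, not minikube source):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Re-run the paused-state probe from the transcript: on this node the
	// crictl listing succeeds, while runc fails with
	// "open /run/runc: no such file or directory".
	func main() {
		for _, args := range [][]string{
			{"crictl", "--timeout=10s", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace=kube-system"},
			{"runc", "list", "-f", "json"},
		} {
			cmd := exec.Command("sudo", args...)
			out, err := cmd.CombinedOutput()
			fmt.Printf("$ sudo %v\nerr=%v\n%s\n", args, err, out)
		}
	}

Reproducing the exit status 1 this way confirms the failure is in the paused check itself, independent of which addon is being disabled: the check expects runc state under /run/runc, and that directory simply is not present here.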

TestAddons/parallel/CloudSpanner (5.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-85df47b6f4-f487h" [42b35dd6-8ddd-4a2f-8d6f-9d0d8058eadb] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004518733s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (342.032603ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:39.819922 1890645 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:39.821736 1890645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:39.821762 1890645 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:39.821768 1890645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:39.822030 1890645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:39.822313 1890645 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:39.824129 1890645 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:39.824159 1890645 addons.go:622] checking whether the cluster is paused
	I1222 00:12:39.824280 1890645 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:39.824291 1890645 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:39.824940 1890645 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:39.843568 1890645 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:39.843627 1890645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:39.863552 1890645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:39.981692 1890645 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:39.981798 1890645 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:40.060407 1890645 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:40.060433 1890645 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:40.060438 1890645 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:40.060442 1890645 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:40.060446 1890645 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:40.060455 1890645 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:40.060459 1890645 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:40.060462 1890645 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:40.060466 1890645 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:40.060473 1890645 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:40.060476 1890645 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:40.060479 1890645 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:40.060482 1890645 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:40.060486 1890645 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:40.060489 1890645 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:40.060495 1890645 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:40.060498 1890645 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:40.060502 1890645 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:40.060506 1890645 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:40.060509 1890645 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:40.060514 1890645 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:40.060517 1890645 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:40.060521 1890645 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:40.060524 1890645 cri.go:96] found id: ""
	I1222 00:12:40.060586 1890645 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:40.083904 1890645 out.go:203] 
	W1222 00:12:40.087085 1890645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:40.087178 1890645 out.go:285] * 
	* 
	W1222 00:12:40.107271 1890645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:40.111402 1890645 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.35s)

TestAddons/parallel/LocalPath (8.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-568394 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-568394 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568394 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [74974e1a-5614-4aa4-950e-8b1a5bb00bbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [74974e1a-5614-4aa4-950e-8b1a5bb00bbb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [74974e1a-5614-4aa4-950e-8b1a5bb00bbb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004217103s
addons_test.go:969: (dbg) Run:  kubectl --context addons-568394 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 ssh "cat /opt/local-path-provisioner/pvc-3ef0a689-774b-47cf-ad57-4a455b94d11d_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-568394 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-568394 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (275.703038ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:34.542431 1890527 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:34.543266 1890527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:34.543282 1890527 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:34.543288 1890527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:34.543548 1890527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:34.543839 1890527 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:34.544193 1890527 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:34.544215 1890527 addons.go:622] checking whether the cluster is paused
	I1222 00:12:34.544318 1890527 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:34.544332 1890527 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:34.544839 1890527 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:34.563371 1890527 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:34.563429 1890527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:34.582253 1890527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:34.679961 1890527 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:34.680064 1890527 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:34.710764 1890527 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:34.710786 1890527 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:34.710791 1890527 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:34.710795 1890527 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:34.710799 1890527 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:34.710831 1890527 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:34.710834 1890527 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:34.710838 1890527 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:34.710841 1890527 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:34.710848 1890527 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:34.710852 1890527 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:34.710856 1890527 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:34.710859 1890527 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:34.710863 1890527 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:34.710866 1890527 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:34.710874 1890527 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:34.710878 1890527 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:34.710882 1890527 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:34.710889 1890527 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:34.710893 1890527 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:34.710897 1890527 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:34.710904 1890527 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:34.710907 1890527 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:34.710910 1890527 cri.go:96] found id: ""
	I1222 00:12:34.710969 1890527 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:34.728675 1890527 out.go:203] 
	W1222 00:12:34.731730 1890527 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:34.731760 1890527 out.go:285] * 
	* 
	W1222 00:12:34.750729 1890527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:34.755258 1890527 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.37s)

TestAddons/parallel/NvidiaDevicePlugin (6.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-j6xlv" [df9fafee-fceb-4642-b06a-7f539cf2c6d8] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003366294s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (264.84878ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:19.893849 1890161 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:19.894660 1890161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:19.894705 1890161 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:19.894726 1890161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:19.895035 1890161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:19.895428 1890161 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:19.895846 1890161 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:19.895888 1890161 addons.go:622] checking whether the cluster is paused
	I1222 00:12:19.896030 1890161 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:19.896060 1890161 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:19.896599 1890161 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:19.915163 1890161 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:19.915215 1890161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:19.931858 1890161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:20.029341 1890161 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:20.029451 1890161 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:20.060580 1890161 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:20.060599 1890161 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:20.060604 1890161 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:20.060608 1890161 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:20.060611 1890161 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:20.060615 1890161 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:20.060618 1890161 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:20.060621 1890161 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:20.060624 1890161 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:20.060633 1890161 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:20.060636 1890161 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:20.060639 1890161 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:20.060643 1890161 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:20.060646 1890161 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:20.060649 1890161 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:20.060656 1890161 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:20.060659 1890161 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:20.060666 1890161 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:20.060669 1890161 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:20.060673 1890161 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:20.060677 1890161 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:20.060680 1890161 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:20.060684 1890161 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:20.060687 1890161 cri.go:96] found id: ""
	I1222 00:12:20.060740 1890161 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:20.076107 1890161 out.go:203] 
	W1222 00:12:20.079281 1890161 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:20.079304 1890161 out.go:285] * 
	* 
	W1222 00:12:20.098199 1890161 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:20.101244 1890161 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

TestAddons/parallel/Yakd (6.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-t69qw" [2ae3e84e-86fe-4e42-b426-b2c49f8ee24a] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003615871s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-568394 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-568394 addons disable yakd --alsologtostderr -v=1: exit status 11 (284.197261ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1222 00:12:26.169254 1890237 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:12:26.170103 1890237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:26.170147 1890237 out.go:374] Setting ErrFile to fd 2...
	I1222 00:12:26.170167 1890237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:12:26.170440 1890237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:12:26.170742 1890237 mustload.go:66] Loading cluster: addons-568394
	I1222 00:12:26.171133 1890237 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:26.171181 1890237 addons.go:622] checking whether the cluster is paused
	I1222 00:12:26.171308 1890237 config.go:182] Loaded profile config "addons-568394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:12:26.171343 1890237 host.go:66] Checking if "addons-568394" exists ...
	I1222 00:12:26.171856 1890237 cli_runner.go:164] Run: docker container inspect addons-568394 --format={{.State.Status}}
	I1222 00:12:26.194344 1890237 ssh_runner.go:195] Run: systemctl --version
	I1222 00:12:26.194405 1890237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568394
	I1222 00:12:26.210969 1890237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38456 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/addons-568394/id_rsa Username:docker}
	I1222 00:12:26.307961 1890237 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:12:26.308048 1890237 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:12:26.348176 1890237 cri.go:96] found id: "ba6d2a10935cf5c209b6afca1ebff1e16153f74caee020cc1c366d8c843169c0"
	I1222 00:12:26.348194 1890237 cri.go:96] found id: "c21e1cad727bb07c7d8adaf0840001040d6fe117b9d467c449dc072fe8d8a4d1"
	I1222 00:12:26.348200 1890237 cri.go:96] found id: "4a3875b2cbd0869f943ca10d43814b8002bf3f27a722c25a8b8aeb8917fedd09"
	I1222 00:12:26.348204 1890237 cri.go:96] found id: "253f713cf37c0e202fd5f9ecde525c7d836e803c0b1ae14c08feea0721197845"
	I1222 00:12:26.348207 1890237 cri.go:96] found id: "99e3eb4d57897adb387eaa235259f2956a42a814b22579140f8a0b9a61f9eb0b"
	I1222 00:12:26.348211 1890237 cri.go:96] found id: "86a55a63efb3c2f34476189389379b8f564f68cfd6cd65dcb91435b26af75187"
	I1222 00:12:26.348215 1890237 cri.go:96] found id: "71380ae3b449af067f6561c7e1dbf5ebb84efeea5ba1ba03afe552822e3ea687"
	I1222 00:12:26.348218 1890237 cri.go:96] found id: "fc3fdb6c93d055b034a341bcdf45e15e6b8aad7558d4886133073027b4e9fd27"
	I1222 00:12:26.348222 1890237 cri.go:96] found id: "c803faf1630b1d419c5b039edf67e66d277fdac70b5ab0dd4b28e2e27e9273d2"
	I1222 00:12:26.348228 1890237 cri.go:96] found id: "234210792d93704bfd744146979af7d1cb2de9d547036c13af38865c63ae78f4"
	I1222 00:12:26.348231 1890237 cri.go:96] found id: "c0562ac57cc54bb92427a7d942fce11e9cc59c28e50e5348d38d5e9963b4d0da"
	I1222 00:12:26.348237 1890237 cri.go:96] found id: "acf18d680a9bd3fe6ad1b8e70b263ac7a73122cbd489dba6772d620a44ee7fd2"
	I1222 00:12:26.348241 1890237 cri.go:96] found id: "e8ebac07e8c89f0946cc1650399e5a41eebeac91c94f89028f0d4534dd8a38e6"
	I1222 00:12:26.348244 1890237 cri.go:96] found id: "8f1fd9842de2015ea1ac658e8cce81fbe85836a5411f38b96e17a5d61884d58b"
	I1222 00:12:26.348247 1890237 cri.go:96] found id: "85e4e89412a1def57a74b66eee15c67aeaab1332c056e3270bc4836839b5e711"
	I1222 00:12:26.348252 1890237 cri.go:96] found id: "7af29b5bafd6d8c22c9fba0381709cb89ac936e00dea9bedd63976c1bdd0a2b5"
	I1222 00:12:26.348255 1890237 cri.go:96] found id: "41f5a4e6951c946b83a2de9296fe163ce64a34e0bc0af88ad648e80309c581d3"
	I1222 00:12:26.348271 1890237 cri.go:96] found id: "2d9448f792111b8aaae12e7101f782d4d6557b5760d87e5891889c0490802c1e"
	I1222 00:12:26.348274 1890237 cri.go:96] found id: "9f23e3541811b1ee98179ac35870ed6c42fc96e29b69c39aece0e9df0b595634"
	I1222 00:12:26.348277 1890237 cri.go:96] found id: "dd0d9f62338da816b09a7ecdc9101d2cfde77f1aeb1b4686f80bfa2bedc5143f"
	I1222 00:12:26.348282 1890237 cri.go:96] found id: "b917df9a6d56718dc4cb4a463b67e63df7c02d4d65099542b77dace1485b028f"
	I1222 00:12:26.348285 1890237 cri.go:96] found id: "12de3313f756fed0abb75c6b73c5a9d23651fbd8557a6dea4e7aafd3b2257fb1"
	I1222 00:12:26.348289 1890237 cri.go:96] found id: "6f8ad0683d7475cc2aead12a28879a9fade4ba732e4d0eeed70e09371952d6f7"
	I1222 00:12:26.348291 1890237 cri.go:96] found id: ""
	I1222 00:12:26.348348 1890237 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 00:12:26.365795 1890237 out.go:203] 
	W1222 00:12:26.368983 1890237 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:12:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 00:12:26.369012 1890237 out.go:285] * 
	* 
	W1222 00:12:26.387149 1890237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:12:26.390502 1890237 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-568394 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1222 00:19:09.609515 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:21:25.759752 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:21:53.457393 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:43.841625 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:43.846995 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:43.857427 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:43.877764 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:43.918119 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:43.998502 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:44.158956 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:44.479552 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:45.121252 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:46.401452 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:48.961732 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:22:54.082019 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:23:04.322484 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:23:24.803177 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:24:05.763868 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:25:27.685483 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:26:25.759096 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m20.002587334s)

-- stdout --
	* [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Found network options:
	  - HTTP_PROXY=localhost:39449
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:39449 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-837733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-837733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000984278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001275115s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001275115s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
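All three kubeadm attempts above stall at the same point: the kubelet never answers http://127.0.0.1:10248/healthz on this cgroup-v1 host. A minimal triage sketch, assuming the profile name functional-837733 from this run is still current (the --extra-config flag is the one the log itself suggests; everything else is illustrative):

    # Probe the kubelet and its logs inside the node, per the kubeadm hints above.
    minikube -p functional-837733 ssh -- curl -sSL http://127.0.0.1:10248/healthz
    minikube -p functional-837733 ssh -- sudo systemctl status kubelet
    minikube -p functional-837733 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # Ensure the minikube IP flagged by the proxy warning is excluded from any proxy.
    export NO_PROXY="$NO_PROXY,192.168.49.2"

    # Retry with the cgroup-driver override suggested at the end of the log.
    minikube start -p functional-837733 --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd

The SystemVerification warning additionally says that kubelet v1.35+ on cgroups v1 requires the kubelet configuration option FailCgroupV1 to be set to false; whether minikube's --extra-config mechanism can pass that option through is an assumption, not verified here.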
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
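The inspect dump is captured in full for the post-mortem; when eyeballing a failure, a Go-template filter over the same data is usually enough. A small sketch, assuming the container/profile name from this run:

    # Container state, node IP, and the host port mapped to the API server (8441/tcp).
    docker inspect -f '{{.State.Status}}' functional-837733
    docker inspect -f '{{(index .NetworkSettings.Networks "functional-837733").IPAddress}}' functional-837733
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-837733

The last template mirrors the one minikube itself runs later in this log to discover the mapped SSH port (22/tcp).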
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 6 (322.562949ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 00:26:55.139840 1915220 status.go:458] kubeconfig endpoint: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
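Status reports the container Running while the kubeconfig check fails, apparently because the aborted start never wrote the functional-837733 entry into the kubeconfig path shown above. A minimal sketch of the repair the warning itself recommends, assuming the cluster is eventually brought up:

    # Rewrite the kubeconfig entry for this profile, then verify kubectl's view.
    minikube update-context -p functional-837733
    kubectl config current-context

Exit status 6 is tolerated by the harness here ("may be ok") precisely because a stale or missing kubeconfig entry is expected after a failed start.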
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image save kicbase/echo-server:functional-885873 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image rm kicbase/echo-server:functional-885873 --alsologtostderr                                                                        │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                                │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                                │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image save --daemon kicbase/echo-server:functional-885873 --alsologtostderr                                                             │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/test/nested/copy/1881219/hosts                                                                                        │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/1881219.pem                                                                                                 │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /usr/share/ca-certificates/1881219.pem                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/18812192.pem                                                                                                │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /usr/share/ca-certificates/18812192.pem                                                                                    │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format short --alsologtostderr                                                                                               │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format yaml --alsologtostderr                                                                                                │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh pgrep buildkitd                                                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ image          │ functional-885873 image ls --format json --alsologtostderr                                                                                                │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                               │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                                    │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                                   │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                                   │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                                   │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                                │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:18:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:18:34.852046 1909635 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:18:34.852146 1909635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:18:34.852192 1909635 out.go:374] Setting ErrFile to fd 2...
	I1222 00:18:34.852197 1909635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:18:34.852436 1909635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:18:34.852840 1909635 out.go:368] Setting JSON to false
	I1222 00:18:34.853671 1909635 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111664,"bootTime":1766251051,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:18:34.853725 1909635 start.go:143] virtualization:  
	I1222 00:18:34.856246 1909635 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:18:34.858633 1909635 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:18:34.858705 1909635 notify.go:221] Checking for updates...
	I1222 00:18:34.863788 1909635 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:18:34.866197 1909635 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:18:34.868744 1909635 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:18:34.871196 1909635 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:18:34.873698 1909635 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:18:34.876584 1909635 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:18:34.907913 1909635 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:18:34.908018 1909635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:18:34.972132 1909635 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-22 00:18:34.963221726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:18:34.972225 1909635 docker.go:319] overlay module found
	I1222 00:18:34.975186 1909635 out.go:179] * Using the docker driver based on user configuration
	I1222 00:18:34.977862 1909635 start.go:309] selected driver: docker
	I1222 00:18:34.977873 1909635 start.go:928] validating driver "docker" against <nil>
	I1222 00:18:34.977885 1909635 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:18:34.978594 1909635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:18:35.046243 1909635 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-22 00:18:35.035301664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:18:35.046390 1909635 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:18:35.046603 1909635 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:18:35.049511 1909635 out.go:179] * Using Docker driver with root privileges
	I1222 00:18:35.052172 1909635 cni.go:84] Creating CNI manager for ""
	I1222 00:18:35.052234 1909635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:18:35.052242 1909635 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:18:35.052319 1909635 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:18:35.057156 1909635 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:18:35.059802 1909635 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:18:35.062666 1909635 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:18:35.065451 1909635 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:18:35.065490 1909635 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:18:35.065499 1909635 cache.go:65] Caching tarball of preloaded images
	I1222 00:18:35.065537 1909635 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:18:35.065584 1909635 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:18:35.065593 1909635 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:18:35.065925 1909635 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:18:35.065944 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json: {Name:mk9e94194b847638ca218611da410475a1d17a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:35.085716 1909635 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:18:35.085729 1909635 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:18:35.085751 1909635 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:18:35.085786 1909635 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:18:35.085914 1909635 start.go:364] duration metric: took 111.275µs to acquireMachinesLock for "functional-837733"
	I1222 00:18:35.085942 1909635 start.go:93] Provisioning new machine with config: &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:18:35.086014 1909635 start.go:125] createHost starting for "" (driver="docker")
	I1222 00:18:35.089387 1909635 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1222 00:18:35.089714 1909635 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:39449 to docker env.
	I1222 00:18:35.089744 1909635 start.go:159] libmachine.API.Create for "functional-837733" (driver="docker")
	I1222 00:18:35.089781 1909635 client.go:173] LocalClient.Create starting
	I1222 00:18:35.089865 1909635 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 00:18:35.089901 1909635 main.go:144] libmachine: Decoding PEM data...
	I1222 00:18:35.089921 1909635 main.go:144] libmachine: Parsing certificate...
	I1222 00:18:35.089978 1909635 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 00:18:35.089995 1909635 main.go:144] libmachine: Decoding PEM data...
	I1222 00:18:35.090012 1909635 main.go:144] libmachine: Parsing certificate...
	I1222 00:18:35.090403 1909635 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 00:18:35.108460 1909635 cli_runner.go:211] docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 00:18:35.108544 1909635 network_create.go:284] running [docker network inspect functional-837733] to gather additional debugging logs...
	I1222 00:18:35.108560 1909635 cli_runner.go:164] Run: docker network inspect functional-837733
	W1222 00:18:35.124570 1909635 cli_runner.go:211] docker network inspect functional-837733 returned with exit code 1
	I1222 00:18:35.124590 1909635 network_create.go:287] error running [docker network inspect functional-837733]: docker network inspect functional-837733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-837733 not found
	I1222 00:18:35.124601 1909635 network_create.go:289] output of [docker network inspect functional-837733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-837733 not found
	
	** /stderr **
	I1222 00:18:35.124715 1909635 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:18:35.141644 1909635 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018cff20}
	I1222 00:18:35.141677 1909635 network_create.go:124] attempt to create docker network functional-837733 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1222 00:18:35.141735 1909635 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-837733 functional-837733
	I1222 00:18:35.199660 1909635 network_create.go:108] docker network functional-837733 192.168.49.0/24 created
	I1222 00:18:35.199683 1909635 kic.go:121] calculated static IP "192.168.49.2" for the "functional-837733" container
	I1222 00:18:35.199773 1909635 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 00:18:35.214747 1909635 cli_runner.go:164] Run: docker volume create functional-837733 --label name.minikube.sigs.k8s.io=functional-837733 --label created_by.minikube.sigs.k8s.io=true
	I1222 00:18:35.232476 1909635 oci.go:103] Successfully created a docker volume functional-837733
	I1222 00:18:35.232552 1909635 cli_runner.go:164] Run: docker run --rm --name functional-837733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-837733 --entrypoint /usr/bin/test -v functional-837733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 00:18:35.768407 1909635 oci.go:107] Successfully prepared a docker volume functional-837733
	I1222 00:18:35.768462 1909635 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:18:35.768471 1909635 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 00:18:35.768536 1909635 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-837733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 00:18:39.577885 1909635 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-837733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.809316771s)
	I1222 00:18:39.577907 1909635 kic.go:203] duration metric: took 3.809433599s to extract preloaded images to volume ...
	W1222 00:18:39.578043 1909635 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 00:18:39.578148 1909635 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 00:18:39.634018 1909635 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-837733 --name functional-837733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-837733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-837733 --network functional-837733 --ip 192.168.49.2 --volume functional-837733:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 00:18:39.947301 1909635 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Running}}
	I1222 00:18:39.966771 1909635 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:18:39.991037 1909635 cli_runner.go:164] Run: docker exec functional-837733 stat /var/lib/dpkg/alternatives/iptables
	I1222 00:18:40.068062 1909635 oci.go:144] the created container "functional-837733" has a running status.
	I1222 00:18:40.068084 1909635 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa...
	I1222 00:18:40.293704 1909635 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 00:18:40.313833 1909635 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:18:40.348862 1909635 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 00:18:40.348880 1909635 kic_runner.go:114] Args: [docker exec --privileged functional-837733 chown docker:docker /home/docker/.ssh/authorized_keys]
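Annotation: the three steps above (key generation, authorized_keys copy, chown) are all it takes to get SSH into the kic container. A hedged equivalent with stock tooling, assuming /home/docker/.ssh already exists in the kicbase image and using the host port published for 22/tcp in this run (38471, see the dial below):

    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker cp ./id_rsa.pub functional-837733:/home/docker/.ssh/authorized_keys
    docker exec --privileged functional-837733 chown docker:docker /home/docker/.ssh/authorized_keys
    ssh -i ./id_rsa -p 38471 docker@127.0.0.1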
	I1222 00:18:40.412212 1909635 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:18:40.433530 1909635 machine.go:94] provisionDockerMachine start ...
	I1222 00:18:40.433622 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:40.457177 1909635 main.go:144] libmachine: Using SSH client type: native
	I1222 00:18:40.457530 1909635 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:18:40.457538 1909635 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:18:40.458870 1909635 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39916->127.0.0.1:38471: read: connection reset by peer
	I1222 00:18:43.594409 1909635 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:18:43.594423 1909635 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:18:43.594506 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:43.612013 1909635 main.go:144] libmachine: Using SSH client type: native
	I1222 00:18:43.612316 1909635 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:18:43.612324 1909635 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:18:43.754370 1909635 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:18:43.754440 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:43.771814 1909635 main.go:144] libmachine: Using SSH client type: native
	I1222 00:18:43.772125 1909635 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:18:43.772138 1909635 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:18:43.901614 1909635 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:18:43.901631 1909635 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:18:43.901664 1909635 ubuntu.go:190] setting up certificates
	I1222 00:18:43.901672 1909635 provision.go:84] configureAuth start
	I1222 00:18:43.901731 1909635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:18:43.919336 1909635 provision.go:143] copyHostCerts
	I1222 00:18:43.919403 1909635 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:18:43.919411 1909635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:18:43.919485 1909635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:18:43.919571 1909635 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:18:43.919575 1909635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:18:43.919598 1909635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:18:43.919670 1909635 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:18:43.919674 1909635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:18:43.919696 1909635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:18:43.919736 1909635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:18:44.053801 1909635 provision.go:177] copyRemoteCerts
	I1222 00:18:44.053853 1909635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:18:44.053892 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:44.070524 1909635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:18:44.169091 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:18:44.186453 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:18:44.203167 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:18:44.220782 1909635 provision.go:87] duration metric: took 319.088521ms to configureAuth
	I1222 00:18:44.220799 1909635 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:18:44.220986 1909635 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:18:44.221095 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:44.238066 1909635 main.go:144] libmachine: Using SSH client type: native
	I1222 00:18:44.238360 1909635 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:18:44.238370 1909635 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:18:44.504994 1909635 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:18:44.505006 1909635 machine.go:97] duration metric: took 4.071465624s to provisionDockerMachine
	I1222 00:18:44.505029 1909635 client.go:176] duration metric: took 9.415230271s to LocalClient.Create
	I1222 00:18:44.505039 1909635 start.go:167] duration metric: took 9.415296387s to libmachine.API.Create "functional-837733"
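Annotation: the CRIO_MINIKUBE_OPTIONS drop-in written a few lines up is what passes the --insecure-registry range to CRI-O. If the crio restart had failed, the first things to check would be the file itself and how the unit picks it up (profile name from this run; whether the unit sources it via an EnvironmentFile is an assumption about the kicbase image that systemctl cat will confirm or refute):

    minikube -p functional-837733 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p functional-837733 ssh -- systemctl cat crio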
	I1222 00:18:44.505045 1909635 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:18:44.505055 1909635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:18:44.505117 1909635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:18:44.505153 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:44.522649 1909635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:18:44.617451 1909635 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:18:44.620699 1909635 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:18:44.620716 1909635 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:18:44.620726 1909635 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:18:44.620777 1909635 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:18:44.620876 1909635 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:18:44.620954 1909635 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:18:44.621006 1909635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:18:44.628964 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:18:44.647109 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:18:44.663868 1909635 start.go:296] duration metric: took 158.809336ms for postStartSetup
	I1222 00:18:44.664217 1909635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:18:44.684207 1909635 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:18:44.684484 1909635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:18:44.684522 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:44.701489 1909635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:18:44.798644 1909635 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:18:44.803658 1909635 start.go:128] duration metric: took 9.717629735s to createHost
	I1222 00:18:44.803674 1909635 start.go:83] releasing machines lock for "functional-837733", held for 9.717751216s
	I1222 00:18:44.803776 1909635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:18:44.824140 1909635 out.go:179] * Found network options:
	I1222 00:18:44.827171 1909635 out.go:179]   - HTTP_PROXY=localhost:39449
	W1222 00:18:44.830107 1909635 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1222 00:18:44.833076 1909635 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
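Annotation: per the handbook page linked above, the proxy warning is avoided by excluding the minikube address range from the proxy before starting. A minimal sketch with the values from this run:

    export HTTP_PROXY=localhost:39449
    export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24   # covers the node IP 192.168.49.2
    minikube start -p functional-837733 --driver=docker --container-runtime=crio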
	I1222 00:18:44.836203 1909635 ssh_runner.go:195] Run: cat /version.json
	I1222 00:18:44.836250 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:44.836289 1909635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:18:44.836344 1909635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:18:44.855900 1909635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:18:44.855795 1909635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:18:45.045293 1909635 ssh_runner.go:195] Run: systemctl --version
	I1222 00:18:45.054686 1909635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:18:45.100154 1909635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:18:45.108875 1909635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:18:45.108961 1909635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:18:45.149218 1909635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 00:18:45.149235 1909635 start.go:496] detecting cgroup driver to use...
	I1222 00:18:45.149285 1909635 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:18:45.149388 1909635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:18:45.179587 1909635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:18:45.201195 1909635 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:18:45.201467 1909635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:18:45.246433 1909635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:18:45.281663 1909635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:18:45.422170 1909635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:18:45.562097 1909635 docker.go:234] disabling docker service ...
	I1222 00:18:45.562165 1909635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:18:45.583296 1909635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:18:45.597384 1909635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:18:45.718038 1909635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:18:45.839830 1909635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:18:45.853101 1909635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:18:45.867174 1909635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:18:45.867247 1909635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:18:45.876377 1909635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:18:45.876449 1909635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:18:45.886061 1909635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:18:45.894932 1909635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:18:45.903582 1909635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:18:45.911448 1909635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:18:45.920356 1909635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:18:45.933526 1909635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
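Annotation: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with lines equivalent to the following (every value comes from the commands in this log):

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|sysctls|unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",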
	I1222 00:18:45.942336 1909635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:18:45.949802 1909635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:18:45.957156 1909635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:18:46.068740 1909635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:18:46.224140 1909635 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:18:46.224210 1909635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:18:46.227948 1909635 start.go:564] Will wait 60s for crictl version
	I1222 00:18:46.228001 1909635 ssh_runner.go:195] Run: which crictl
	I1222 00:18:46.231211 1909635 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:18:46.258970 1909635 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:18:46.259051 1909635 ssh_runner.go:195] Run: crio --version
	I1222 00:18:46.288538 1909635 ssh_runner.go:195] Run: crio --version
	I1222 00:18:46.323589 1909635 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:18:46.326299 1909635 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:18:46.342194 1909635 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:18:46.346202 1909635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
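Annotation: the { grep -v ...; echo ...; } > tmp; sudo cp pattern above is deliberate. Inside a container /etc/hosts is a bind mount, so the file must be rewritten in place (cp) rather than replaced; mv would attempt rename(2) onto a mount point and fail with "Device or resource busy". A reusable sketch of the same idempotent update (function name is illustrative):

    update_hosts_entry() {   # usage: update_hosts_entry <ip> <name>
      local pattern=$'\t'"$2"'$'
      { grep -v "$pattern" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.49.1 host.minikube.internal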
	I1222 00:18:46.355758 1909635 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:18:46.355870 1909635 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:18:46.355926 1909635 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:18:46.394037 1909635 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:18:46.394049 1909635 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:18:46.394103 1909635 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:18:46.418750 1909635 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:18:46.418762 1909635 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:18:46.418768 1909635 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:18:46.418857 1909635 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:18:46.418946 1909635 ssh_runner.go:195] Run: crio config
	I1222 00:18:46.490318 1909635 cni.go:84] Creating CNI manager for ""
	I1222 00:18:46.490329 1909635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:18:46.490349 1909635 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:18:46.490373 1909635 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:18:46.490500 1909635 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:18:46.490571 1909635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:18:46.498428 1909635 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:18:46.498494 1909635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:18:46.506220 1909635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:18:46.520467 1909635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:18:46.534094 1909635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
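Annotation: at this point the rendered config sits in /var/tmp/minikube/kubeadm.yaml.new. Before an init like the one below goes wrong, it can be sanity-checked offline; a hedged sketch using the binary path from this log (newer kubeadm releases ship 'config validate', and --dry-run avoids touching the node):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run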
	I1222 00:18:46.546905 1909635 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:18:46.550302 1909635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 00:18:46.560194 1909635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:18:46.672049 1909635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:18:46.690017 1909635 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:18:46.690028 1909635 certs.go:195] generating shared ca certs ...
	I1222 00:18:46.690041 1909635 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:46.690182 1909635 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:18:46.690220 1909635 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:18:46.690227 1909635 certs.go:257] generating profile certs ...
	I1222 00:18:46.690277 1909635 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:18:46.690289 1909635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt with IP's: []
	I1222 00:18:46.777563 1909635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt ...
	I1222 00:18:46.777579 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: {Name:mkd7d05f9883f34cabab46453e72bc1dcfb82e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:46.777790 1909635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key ...
	I1222 00:18:46.777797 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key: {Name:mkfe0f1aceb22bd3350f453163febd01a2732169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:46.777888 1909635 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:18:46.777900 1909635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt.ceb2a50e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1222 00:18:47.948449 1909635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt.ceb2a50e ...
	I1222 00:18:47.948465 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt.ceb2a50e: {Name:mk65b3f3e8345a8f0e6b3a2556a13ffd6ed0451e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:47.948670 1909635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e ...
	I1222 00:18:47.948687 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e: {Name:mk0a4b3c88803078bf3ff6fb9afd8ca15bd08b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:47.948777 1909635 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt.ceb2a50e -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt
	I1222 00:18:47.948853 1909635 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key
	I1222 00:18:47.948907 1909635 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:18:47.948918 1909635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt with IP's: []
	I1222 00:18:48.168333 1909635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt ...
	I1222 00:18:48.168348 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt: {Name:mk7e8533a6061b0490d110b68b18ef5ab72d9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:18:48.168540 1909635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key ...
	I1222 00:18:48.168549 1909635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key: {Name:mka4ce77f3e3acb197f6aa050fbcaf21fdbf8ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
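Annotation: a quick way to confirm the SANs requested above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2) actually made it into the signed apiserver cert, using the profile path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt \
      | grep -A1 'Subject Alternative Name'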
	I1222 00:18:48.168760 1909635 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:18:48.168799 1909635 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:18:48.168806 1909635 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:18:48.168836 1909635 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:18:48.168858 1909635 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:18:48.168880 1909635 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:18:48.168922 1909635 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:18:48.169506 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:18:48.188634 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:18:48.206837 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:18:48.224460 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:18:48.241795 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:18:48.259080 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:18:48.276580 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:18:48.293420 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:18:48.310507 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:18:48.327757 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:18:48.347659 1909635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:18:48.365736 1909635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:18:48.378475 1909635 ssh_runner.go:195] Run: openssl version
	I1222 00:18:48.384923 1909635 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:18:48.392442 1909635 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:18:48.400006 1909635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:18:48.404068 1909635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:18:48.404125 1909635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:18:48.446572 1909635 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:18:48.454126 1909635 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 00:18:48.461159 1909635 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:18:48.468268 1909635 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:18:48.475534 1909635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:18:48.479278 1909635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:18:48.479334 1909635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:18:48.520262 1909635 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:18:48.527614 1909635 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 00:18:48.534738 1909635 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:18:48.542155 1909635 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:18:48.549445 1909635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:18:48.553163 1909635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:18:48.553230 1909635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:18:48.594772 1909635 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:18:48.602225 1909635 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
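Annotation: the <hash>.0 symlinks created above follow OpenSSL's subject-hash lookup scheme: the link name is the output of openssl x509 -hash. For the minikube CA in this run that hash is b5213941, matching the b5213941.0 link a few lines earlier, and once linked the cert is found by any -CApath lookup:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem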
	I1222 00:18:48.609407 1909635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:18:48.612770 1909635 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 00:18:48.612813 1909635 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:18:48.612878 1909635 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:18:48.612930 1909635 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:18:48.638536 1909635 cri.go:96] found id: ""
	I1222 00:18:48.638595 1909635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:18:48.646412 1909635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:18:48.653973 1909635 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:18:48.654027 1909635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:18:48.661416 1909635 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:18:48.661433 1909635 kubeadm.go:158] found existing configuration files:
	
	I1222 00:18:48.661489 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:18:48.668990 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:18:48.669045 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:18:48.676385 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:18:48.683611 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:18:48.683666 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:18:48.690998 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:18:48.698619 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:18:48.698676 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:18:48.705932 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:18:48.713537 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:18:48.713600 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:18:48.720645 1909635 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:18:48.757678 1909635 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:18:48.757963 1909635 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:18:48.843248 1909635 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:18:48.843311 1909635 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:18:48.843345 1909635 kubeadm.go:319] OS: Linux
	I1222 00:18:48.843389 1909635 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:18:48.843436 1909635 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:18:48.843481 1909635 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:18:48.843528 1909635 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:18:48.843575 1909635 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:18:48.843621 1909635 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:18:48.843674 1909635 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:18:48.843720 1909635 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:18:48.843765 1909635 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:18:48.911454 1909635 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:18:48.911567 1909635 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:18:48.911682 1909635 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:18:48.919542 1909635 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:18:48.925866 1909635 out.go:252]   - Generating certificates and keys ...
	I1222 00:18:48.925955 1909635 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:18:48.926019 1909635 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:18:48.999075 1909635 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 00:18:49.208500 1909635 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 00:18:49.405634 1909635 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 00:18:49.789233 1909635 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 00:18:49.967814 1909635 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 00:18:49.968085 1909635 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-837733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:18:50.088235 1909635 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 00:18:50.088522 1909635 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-837733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:18:50.170181 1909635 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 00:18:50.658081 1909635 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 00:18:50.744229 1909635 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 00:18:50.744448 1909635 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:18:50.831260 1909635 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:18:51.020297 1909635 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:18:51.487291 1909635 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:18:52.117036 1909635 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:18:52.200345 1909635 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:18:52.201026 1909635 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:18:52.204230 1909635 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:18:52.207968 1909635 out.go:252]   - Booting up control plane ...
	I1222 00:18:52.208061 1909635 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:18:52.208137 1909635 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:18:52.209249 1909635 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:18:52.224760 1909635 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:18:52.225020 1909635 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:18:52.232383 1909635 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:18:52.232640 1909635 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:18:52.232820 1909635 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:18:52.368664 1909635 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:18:52.368839 1909635 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:22:52.369170 1909635 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000984278s
	I1222 00:22:52.369193 1909635 kubeadm.go:319] 
	I1222 00:22:52.369245 1909635 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:22:52.369276 1909635 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:22:52.369383 1909635 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:22:52.369388 1909635 kubeadm.go:319] 
	I1222 00:22:52.369500 1909635 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:22:52.369545 1909635 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:22:52.369573 1909635 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:22:52.369576 1909635 kubeadm.go:319] 
	I1222 00:22:52.375632 1909635 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:22:52.376045 1909635 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:22:52.376152 1909635 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:22:52.376385 1909635 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:22:52.376390 1909635 kubeadm.go:319] 
	I1222 00:22:52.376456 1909635 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:22:52.376567 1909635 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-837733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-837733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000984278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:22:52.377006 1909635 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:22:52.790312 1909635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:22:52.802783 1909635 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:22:52.802834 1909635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:22:52.810228 1909635 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:22:52.810236 1909635 kubeadm.go:158] found existing configuration files:
	
	I1222 00:22:52.810285 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:22:52.817456 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:22:52.817507 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:22:52.824644 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:22:52.832084 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:22:52.832138 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:22:52.839253 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:22:52.846724 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:22:52.846786 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:22:52.854236 1909635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:22:52.861398 1909635 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:22:52.861454 1909635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:22:52.868315 1909635 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:22:52.903322 1909635 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:22:52.903615 1909635 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:22:52.979425 1909635 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:22:52.979493 1909635 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:22:52.979527 1909635 kubeadm.go:319] OS: Linux
	I1222 00:22:52.979572 1909635 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:22:52.979619 1909635 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:22:52.979667 1909635 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:22:52.979714 1909635 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:22:52.979761 1909635 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:22:52.979813 1909635 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:22:52.979858 1909635 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:22:52.979905 1909635 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:22:52.979950 1909635 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:22:53.048394 1909635 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:22:53.048496 1909635 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:22:53.048585 1909635 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:22:53.055700 1909635 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:22:53.061009 1909635 out.go:252]   - Generating certificates and keys ...
	I1222 00:22:53.061098 1909635 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:22:53.061177 1909635 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:22:53.061252 1909635 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:22:53.061354 1909635 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:22:53.061436 1909635 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:22:53.061496 1909635 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:22:53.061561 1909635 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:22:53.061621 1909635 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:22:53.061694 1909635 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:22:53.061771 1909635 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:22:53.061807 1909635 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:22:53.061862 1909635 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:22:53.308560 1909635 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:22:53.651911 1909635 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:22:53.794315 1909635 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:22:54.067223 1909635 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:22:54.228941 1909635 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:22:54.229734 1909635 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:22:54.232381 1909635 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:22:54.235610 1909635 out.go:252]   - Booting up control plane ...
	I1222 00:22:54.235719 1909635 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:22:54.235805 1909635 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:22:54.236151 1909635 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:22:54.252366 1909635 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:22:54.252468 1909635 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:22:54.259649 1909635 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:22:54.259910 1909635 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:22:54.259951 1909635 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:22:54.391585 1909635 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:22:54.391698 1909635 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:26:54.392122 1909635 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001275115s
	I1222 00:26:54.392402 1909635 kubeadm.go:319] 
	I1222 00:26:54.392481 1909635 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:26:54.392514 1909635 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:26:54.392617 1909635 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:26:54.392620 1909635 kubeadm.go:319] 
	I1222 00:26:54.392723 1909635 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:26:54.392754 1909635 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:26:54.392783 1909635 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:26:54.392786 1909635 kubeadm.go:319] 
	I1222 00:26:54.397769 1909635 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:26:54.398182 1909635 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:26:54.398289 1909635 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:26:54.398523 1909635 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:26:54.398527 1909635 kubeadm.go:319] 
	I1222 00:26:54.398594 1909635 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:26:54.398642 1909635 kubeadm.go:403] duration metric: took 8m5.785832037s to StartCluster
	I1222 00:26:54.398674 1909635 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:26:54.398730 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:26:54.424112 1909635 cri.go:96] found id: ""
	I1222 00:26:54.424135 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.424142 1909635 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:26:54.424148 1909635 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:26:54.424208 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:26:54.447905 1909635 cri.go:96] found id: ""
	I1222 00:26:54.447918 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.447925 1909635 logs.go:284] No container was found matching "etcd"
	I1222 00:26:54.447930 1909635 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:26:54.447992 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:26:54.471627 1909635 cri.go:96] found id: ""
	I1222 00:26:54.471641 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.471648 1909635 logs.go:284] No container was found matching "coredns"
	I1222 00:26:54.471653 1909635 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:26:54.471709 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:26:54.497005 1909635 cri.go:96] found id: ""
	I1222 00:26:54.497018 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.497025 1909635 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:26:54.497030 1909635 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:26:54.497089 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:26:54.522984 1909635 cri.go:96] found id: ""
	I1222 00:26:54.522998 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.523005 1909635 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:26:54.523010 1909635 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:26:54.523068 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:26:54.550977 1909635 cri.go:96] found id: ""
	I1222 00:26:54.551002 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.551009 1909635 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:26:54.551014 1909635 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:26:54.551080 1909635 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:26:54.576290 1909635 cri.go:96] found id: ""
	I1222 00:26:54.576304 1909635 logs.go:282] 0 containers: []
	W1222 00:26:54.576311 1909635 logs.go:284] No container was found matching "kindnet"
	I1222 00:26:54.576319 1909635 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:26:54.576330 1909635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:26:54.643145 1909635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:26:54.634146    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.634730    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.636403    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.637026    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.638991    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:26:54.634146    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.634730    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.636403    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.637026    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:54.638991    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:26:54.643156 1909635 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:26:54.643167 1909635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:26:54.673819 1909635 logs.go:123] Gathering logs for container status ...
	I1222 00:26:54.673837 1909635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:26:54.703887 1909635 logs.go:123] Gathering logs for kubelet ...
	I1222 00:26:54.703903 1909635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:26:54.768381 1909635 logs.go:123] Gathering logs for dmesg ...
	I1222 00:26:54.768399 1909635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 00:26:54.783099 1909635 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001275115s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:26:54.783141 1909635 out.go:285] * 
	W1222 00:26:54.783309 1909635 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001275115s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:26:54.783331 1909635 out.go:285] * 
	W1222 00:26:54.785734 1909635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:26:54.791106 1909635 out.go:203] 
	W1222 00:26:54.793912 1909635 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001275115s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:26:54.793984 1909635 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:26:54.794011 1909635 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:26:54.797082 1909635 out.go:203] 
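The suggestion printed above targets the common cgroup-driver mismatch between CRI-O (systemd) and the kubelet; as an illustrative invocation against this profile (binary and flag taken verbatim from the suggestion in the log) it would look like:

	out/minikube-linux-arm64 start -p functional-837733 --extra-config=kubelet.cgroup-driver=systemd

Note, however, that the kubelet journal in the log dump below points at a different failure, the kubelet's own cgroup v1 validation, so the driver flag alone may not be the fix for this run.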
	
	
	==> CRI-O <==
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218337622Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218383266Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.21843389Z" level=info msg="Create NRI interface"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.21853952Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218547544Z" level=info msg="runtime interface created"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218561124Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218568147Z" level=info msg="runtime interface starting up..."
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218573784Z" level=info msg="starting plugins..."
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218586083Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:18:46 functional-837733 crio[842]: time="2025-12-22T00:18:46.218652042Z" level=info msg="No systemd watchdog enabled"
	Dec 22 00:18:46 functional-837733 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.914728957Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=f2e22e19-1171-4369-8271-d9a62d700d1a name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.915937882Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=83488fdd-f387-4d1a-9e4c-aaa9ce0cf037 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.916447594Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1aed2a54-167b-4408-a23b-ca051750f1d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.91696373Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=e4e3290a-18e0-4de1-9349-b1c82179cbc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.917618956Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9536beda-3432-4c4e-af15-f0c82f519d0f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.918122326Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=524d2168-cd33-4a21-bcb3-5e588d4923f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:18:48 functional-837733 crio[842]: time="2025-12-22T00:18:48.918557324Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3a1d79ad-9d93-4a0d-8e8f-631491afdeaa name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.051613225Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=5630428c-b75e-4a3d-84f1-4f59e645204a name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.052255389Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=722cfec5-034a-4f5b-b01a-10814e87fc5e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.052796911Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=50e73f32-ede9-4002-bcee-a3e0da142bc6 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.053217337Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=a2eba36e-1d52-467a-aac1-c8906af4d289 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.053662567Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e49ec629-f9f5-487b-b3ab-700d18085c75 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.05415916Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=822d532b-1de9-41f2-b6d8-6a465e5a0f01 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:22:53 functional-837733 crio[842]: time="2025-12-22T00:22:53.054668913Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=1a8f56ac-99b0-414b-9199-547caf676ffe name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:26:55.741072    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:55.741869    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:55.743457    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:55.743774    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:26:55.745280    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:26:55 up 1 day,  7:09,  0 user,  load average: 0.07, 0.46, 1.07
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:26:52 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:26:53 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 22 00:26:53 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:26:53 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:26:53 functional-837733 kubelet[4809]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:26:53 functional-837733 kubelet[4809]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:26:53 functional-837733 kubelet[4809]: E1222 00:26:53.638991    4809 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:26:53 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:26:53 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:26:54 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 22 00:26:54 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:26:54 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:26:54 functional-837733 kubelet[4815]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:26:54 functional-837733 kubelet[4815]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:26:54 functional-837733 kubelet[4815]: E1222 00:26:54.393415    4815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:26:54 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:26:54 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:26:55 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 22 00:26:55 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:26:55 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:26:55 functional-837733 kubelet[4913]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:26:55 functional-837733 kubelet[4913]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:26:55 functional-837733 kubelet[4913]: E1222 00:26:55.127552    4913 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:26:55 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:26:55 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
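The kubelet journal above shows the actual root cause: on this host, kubelet v1.35.0-rc.1 fails its own configuration validation ("kubelet is configured to not run on a host using cgroup v1") on every systemd restart (646, 647, 648, ...), so it never serves the 127.0.0.1:10248/healthz endpoint that kubeadm polls for 4m0s. Per the [WARNING SystemVerification] text, keeping a cgroup v1 host working with kubelet v1.35+ requires an explicit opt-in; a minimal sketch of that opt-in follows (the failCgroupV1 field name comes from the warning itself, but where minikube would have to inject it into its generated kubelet config is an assumption here):

	# KubeletConfiguration fragment: explicit opt-in to deprecated cgroup v1 support
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false   # per the KEP-5573 warning above; migrating the host to cgroup v2 is the supported path

The warning also says the validation must be explicitly skipped, so on a cgroup v1 host this config change alone may still not be sufficient.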
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 6 (360.391333ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 00:26:56.241507 1915444 status.go:458] kubeconfig endpoint: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
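The status check fails for a second symptom of the same aborted start: the "functional-837733" entry is missing from the run's kubeconfig, so kubectl would be pointed at a stale endpoint even if the apiserver were up. Outside CI, the fix the output itself suggests would be run roughly as (illustrative; profile name taken from this run):

	minikube update-context -p functional-837733
	kubectl config current-context   # should now name the functional-837733 context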
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.45s)
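Since every kubelet restart in this run dies on the same cgroup v1 validation, the first thing worth confirming on the runner is which cgroup hierarchy the host actually mounts; a one-line probe (plain coreutils, nothing minikube-specific):

	stat -fc %T /sys/fs/cgroup   # cgroup2fs on a cgroup v2 host, tmpfs on a cgroup v1 (hybrid) host

Ubuntu 20.04, which this runner reports, still defaults to the v1 hierarchy unless booted with systemd.unified_cgroup_hierarchy=1, which is consistent with the failure mode above.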

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1222 00:26:56.258867 1881219 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-837733 --alsologtostderr -v=8
E1222 00:27:43.836135 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:28:11.526668 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:31:25.759148 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:32:43.835967 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:32:48.818903 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-837733 --alsologtostderr -v=8: exit status 80 (6m5.859964662s)

-- stdout --
	* [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1222 00:26:56.306253 1915516 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:26:56.306390 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306407 1915516 out.go:374] Setting ErrFile to fd 2...
	I1222 00:26:56.306413 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306666 1915516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:26:56.307062 1915516 out.go:368] Setting JSON to false
	I1222 00:26:56.307917 1915516 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112166,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:26:56.307982 1915516 start.go:143] virtualization:  
	I1222 00:26:56.313423 1915516 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:26:56.316413 1915516 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:26:56.316480 1915516 notify.go:221] Checking for updates...
	I1222 00:26:56.322104 1915516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:26:56.324851 1915516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:26:56.327697 1915516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:26:56.330483 1915516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:26:56.333306 1915516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:26:56.336519 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:56.336644 1915516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:26:56.361435 1915516 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:26:56.361557 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.415190 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.406057371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.415295 1915516 docker.go:319] overlay module found
	I1222 00:26:56.418305 1915516 out.go:179] * Using the docker driver based on existing profile
	I1222 00:26:56.421198 1915516 start.go:309] selected driver: docker
	I1222 00:26:56.421214 1915516 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.421302 1915516 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:26:56.421450 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.481024 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.471849294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.481474 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:56.481538 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:56.481593 1915516 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.484769 1915516 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:26:56.487560 1915516 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:26:56.490463 1915516 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:26:56.493542 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:56.493593 1915516 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:26:56.493605 1915516 cache.go:65] Caching tarball of preloaded images
	I1222 00:26:56.493623 1915516 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:26:56.493685 1915516 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:26:56.493695 1915516 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:26:56.493807 1915516 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:26:56.512598 1915516 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:26:56.512623 1915516 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:26:56.512639 1915516 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:26:56.512668 1915516 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:26:56.512725 1915516 start.go:364] duration metric: took 34.567µs to acquireMachinesLock for "functional-837733"
	I1222 00:26:56.512749 1915516 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:26:56.512754 1915516 fix.go:54] fixHost starting: 
	I1222 00:26:56.513015 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:26:56.528652 1915516 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:26:56.528683 1915516 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:26:56.531906 1915516 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:26:56.531939 1915516 machine.go:94] provisionDockerMachine start ...
	I1222 00:26:56.532017 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.548171 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.548493 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.548508 1915516 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:26:56.684687 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.684712 1915516 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:26:56.684788 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.701819 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.702143 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.702159 1915516 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:26:56.841689 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.841784 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.859107 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.859418 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.859440 1915516 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:26:56.989653 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
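	The hostname step above patches /etc/hosts over SSH so the 127.0.1.1 entry tracks the machine name. The same fixup logic can be replayed by hand against a scratch copy of the file (a minimal sketch; /tmp/hosts-test is an illustrative path, not taken from this log):
	# Sketch: minikube-style /etc/hosts fixup, run against a scratch copy.
	cp /etc/hosts /tmp/hosts-test
	if ! grep -q 'functional-837733' /tmp/hosts-test; then
		if grep -q '^127.0.1.1[[:space:]]' /tmp/hosts-test; then
			# rewrite an existing 127.0.1.1 entry in place
			sed -i 's/^127.0.1.1[[:space:]].*/127.0.1.1 functional-837733/' /tmp/hosts-test
		else
			# or append one if none exists yet
			echo '127.0.1.1 functional-837733' >> /tmp/hosts-test
		fi
	fi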
	I1222 00:26:56.989683 1915516 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:26:56.989702 1915516 ubuntu.go:190] setting up certificates
	I1222 00:26:56.989718 1915516 provision.go:84] configureAuth start
	I1222 00:26:56.989779 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:57.009887 1915516 provision.go:143] copyHostCerts
	I1222 00:26:57.009938 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.009973 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:26:57.009985 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.010062 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:26:57.010160 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010187 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:26:57.010205 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010234 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:26:57.010285 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010306 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:26:57.010311 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010345 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:26:57.010415 1915516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:26:57.201216 1915516 provision.go:177] copyRemoteCerts
	I1222 00:26:57.201293 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:26:57.201364 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.220369 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.317164 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:26:57.317296 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:26:57.334371 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:26:57.334443 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:26:57.355914 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:26:57.356016 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:26:57.374427 1915516 provision.go:87] duration metric: took 384.686066ms to configureAuth
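	configureAuth above regenerates the machine server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.49.2, functional-837733, localhost, minikube). One way to double-check the SANs actually present in the resulting server.pem (a sketch assuming openssl is available on the host):
	# Sketch: print the SAN extension of the freshly generated server cert.
	openssl x509 -noout -text \
		-in /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem \
		| grep -A1 'Subject Alternative Name'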
	I1222 00:26:57.374456 1915516 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:26:57.374673 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:57.374803 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.396901 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:57.397214 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:57.397228 1915516 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:26:57.748387 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:26:57.748426 1915516 machine.go:97] duration metric: took 1.216463326s to provisionDockerMachine
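	The container-runtime step just before this writes a CRI-O drop-in marking the service CIDR 10.96.0.0/12 as an insecure registry, then restarts crio. A sketch for verifying it took effect, assuming (as the tee-then-restart sequence implies) that the crio unit reads /etc/sysconfig/crio.minikube as an environment file:
	# Sketch: confirm the insecure-registry drop-in and that the unit references it.
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environment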
	I1222 00:26:57.748438 1915516 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:26:57.748450 1915516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:26:57.748516 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:26:57.748562 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.764805 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.860955 1915516 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:26:57.863953 1915516 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:26:57.863971 1915516 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:26:57.863984 1915516 command_runner.go:130] > VERSION_ID="12"
	I1222 00:26:57.863988 1915516 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:26:57.863993 1915516 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:26:57.863997 1915516 command_runner.go:130] > ID=debian
	I1222 00:26:57.864002 1915516 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:26:57.864007 1915516 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:26:57.864014 1915516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:26:57.864062 1915516 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:26:57.864078 1915516 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:26:57.864088 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:26:57.864138 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:26:57.864212 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:26:57.864218 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /etc/ssl/certs/18812192.pem
	I1222 00:26:57.864290 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:26:57.864295 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> /etc/test/nested/copy/1881219/hosts
	I1222 00:26:57.864343 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:26:57.871401 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:26:57.887952 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:26:57.904770 1915516 start.go:296] duration metric: took 156.318697ms for postStartSetup
	I1222 00:26:57.904856 1915516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:26:57.904900 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.924440 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.018813 1915516 command_runner.go:130] > 12%
	I1222 00:26:58.018908 1915516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:26:58.024077 1915516 command_runner.go:130] > 172G
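	The two capacity probes above are plain df/awk one-liners over /var and can be replayed verbatim inside the node; in this run they returned 12% used and 172G free:
	# The same /var probes as standalone commands.
	df -h /var | awk 'NR==2{print $5}'    # used percentage ("12%" here)
	df -BG /var | awk 'NR==2{print $4}'   # free space in GiB ("172G" here)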
	I1222 00:26:58.024155 1915516 fix.go:56] duration metric: took 1.511397372s for fixHost
	I1222 00:26:58.024173 1915516 start.go:83] releasing machines lock for "functional-837733", held for 1.511435197s
	I1222 00:26:58.024244 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:58.041457 1915516 ssh_runner.go:195] Run: cat /version.json
	I1222 00:26:58.041501 1915516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:26:58.041514 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.041557 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.062715 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.064581 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.169068 1915516 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:26:58.266136 1915516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:26:58.269129 1915516 ssh_runner.go:195] Run: systemctl --version
	I1222 00:26:58.275371 1915516 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:26:58.275406 1915516 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:26:58.275466 1915516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:26:58.310186 1915516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:26:58.314515 1915516 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:26:58.314563 1915516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:26:58.314620 1915516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:26:58.322376 1915516 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:26:58.322401 1915516 start.go:496] detecting cgroup driver to use...
	I1222 00:26:58.322452 1915516 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:26:58.322524 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:26:58.337747 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:26:58.350732 1915516 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:26:58.350819 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:26:58.365932 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:26:58.378579 1915516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:26:58.492803 1915516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:26:58.604423 1915516 docker.go:234] disabling docker service ...
	I1222 00:26:58.604496 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:26:58.619028 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:26:58.631440 1915516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:26:58.747823 1915516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:26:58.866539 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:26:58.885381 1915516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:26:58.901152 1915516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
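	The crictl.yaml written above pins crictl to the CRI-O socket for every subsequent call. Equivalently, the endpoint can be supplied per invocation, which is useful before the config file has been written (a sketch; --runtime-endpoint is a standard crictl flag):
	# Per-invocation equivalent of /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version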
	I1222 00:26:58.902678 1915516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:26:58.902778 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.912460 1915516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:26:58.912567 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.924743 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.935404 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.944063 1915516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:26:58.951908 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.960893 1915516 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.969826 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.978699 1915516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:26:58.985309 1915516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:26:58.986567 1915516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:26:58.993813 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.112143 1915516 ssh_runner.go:195] Run: sudo systemctl restart crio
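	The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, sets conmon_cgroup to "pod", and injects the unprivileged-port sysctl, before the daemon-reload and crio restart. One grep is enough to eyeball the rewritten keys once the service is back (a sketch; key names as used above):
	# Sketch: spot-check the keys the sed edits rewrote.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf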
	I1222 00:26:59.288646 1915516 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:26:59.288768 1915516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:26:59.292480 1915516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1222 00:26:59.292507 1915516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:26:59.292515 1915516 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1222 00:26:59.292522 1915516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:26:59.292531 1915516 command_runner.go:130] > Access: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292537 1915516 command_runner.go:130] > Modify: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292542 1915516 command_runner.go:130] > Change: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292549 1915516 command_runner.go:130] >  Birth: -
	I1222 00:26:59.292591 1915516 start.go:564] Will wait 60s for crictl version
	I1222 00:26:59.292646 1915516 ssh_runner.go:195] Run: which crictl
	I1222 00:26:59.295996 1915516 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:26:59.296252 1915516 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:26:59.318308 1915516 command_runner.go:130] > Version:  0.1.0
	I1222 00:26:59.318367 1915516 command_runner.go:130] > RuntimeName:  cri-o
	I1222 00:26:59.318402 1915516 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1222 00:26:59.318421 1915516 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:26:59.320537 1915516 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:26:59.320652 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.347108 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.347172 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.347193 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.347237 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.347255 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.347273 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.347296 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.347319 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.347336 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.347352 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.347374 1915516 command_runner.go:130] >      static
	I1222 00:26:59.347390 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.347405 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.347421 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.347442 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.347466 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.347483 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.347500 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.347516 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.347534 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.350153 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.376069 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.376144 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.376164 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.376182 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.376201 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.376235 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.376261 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.376279 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.376305 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.376329 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.376347 1915516 command_runner.go:130] >      static
	I1222 00:26:59.376375 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.376391 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.376409 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.376433 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.376462 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.376479 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.376495 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.376513 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.376545 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.383960 1915516 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:26:59.386763 1915516 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:26:59.402614 1915516 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:26:59.406608 1915516 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:26:59.406693 1915516 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:26:59.406800 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:59.406861 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.440185 1915516 command_runner.go:130] > {
	I1222 00:26:59.440203 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.440208 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440216 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.440220 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440226 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.440230 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440234 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440243 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.440251 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.440254 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440259 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.440263 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440270 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440273 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440277 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440284 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.440288 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440301 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.440305 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440309 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440317 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.440326 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.440329 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440333 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.440337 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440344 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440347 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440350 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440357 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.440361 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440366 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.440369 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440373 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440381 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.440388 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.440392 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440396 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.440404 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.440408 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440411 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440414 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440421 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.440424 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440429 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.440433 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440437 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440445 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.440452 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.440455 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440459 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.440466 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440470 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440474 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440482 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440486 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440489 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440492 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440499 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.440502 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440509 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.440512 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440516 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440524 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.440532 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.440535 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440539 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.440542 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440546 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440549 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440553 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440557 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440560 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440563 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440569 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.440573 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440578 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.440582 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440586 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440594 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.440602 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.440606 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440610 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.440613 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440624 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440627 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440631 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440634 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440637 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440640 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440647 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.440651 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440656 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.440659 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440663 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440671 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.440679 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.440681 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440685 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.440689 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440693 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440697 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440700 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440706 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.440710 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440714 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.440718 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440721 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440729 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.440745 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.440748 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440752 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.440755 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440759 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440762 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440765 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440769 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440775 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440779 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440785 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.440789 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440793 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.440797 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440800 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440808 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.440816 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.440819 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440842 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.440846 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440850 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.440853 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440857 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440860 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.440863 1915516 command_runner.go:130] >     }
	I1222 00:26:59.440865 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.440868 1915516 command_runner.go:130] > }
	I1222 00:26:59.441031 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.441039 1915516 crio.go:433] Images already preloaded, skipping extraction
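	The preload check above concludes from the crictl dump that every required image is already present. To inspect the same dump without the test wrapper, the JSON can be flattened to its repo tags (a sketch assuming jq is installed on the host):
	# Sketch: list the preloaded images by tag.
	sudo crictl images --output json | jq -r '.images[].repoTags[]'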
	I1222 00:26:59.441099 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.463515 1915516 command_runner.go:130] > {
	I1222 00:26:59.463533 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.463538 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463547 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.463552 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463566 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.463570 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463574 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463582 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.463590 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.463594 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463598 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.463602 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463606 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463610 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463613 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463619 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.463624 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463629 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.463632 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463636 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463655 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.463664 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.463667 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463671 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.463675 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463680 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463683 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463686 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463693 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.463697 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463703 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.463706 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463710 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463718 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.463726 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.463730 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463734 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.463741 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.463745 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463748 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463752 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463758 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.463761 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463767 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.463771 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463775 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463782 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.463790 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.463793 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463797 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.463801 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463805 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463808 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463816 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463819 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463823 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463826 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463832 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.463836 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463841 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.463844 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463848 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463855 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.463863 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.463867 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463871 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.463875 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463878 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463882 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463885 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463893 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463896 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463899 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463905 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.463909 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463914 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.463917 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463921 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463929 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.463937 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.463940 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463944 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.463947 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463951 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463954 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463958 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463962 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463965 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463968 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463974 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.463979 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463984 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.463987 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463991 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463998 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.464006 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.464009 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464013 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.464017 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464021 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464024 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464027 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464033 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.464040 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464046 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.464050 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464053 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464061 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.464078 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.464081 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464085 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.464089 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464093 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.464096 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464100 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464103 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464106 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464109 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464115 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.464119 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464123 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.464126 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464130 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464137 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.464145 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.464148 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464151 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.464155 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464158 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.464161 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464165 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464168 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.464171 1915516 command_runner.go:130] >     }
	I1222 00:26:59.464174 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.464176 1915516 command_runner.go:130] > }
	I1222 00:26:59.467117 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.467182 1915516 cache_images.go:86] Images are preloaded, skipping loading
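	(Annotation: the JSON above is the raw output of `sudo crictl images --output json`. A small, self-contained Go sketch of how that payload can be decoded; the field names match the JSON printed above, while the type names are illustrative, not minikube's own. Note that "size" is serialized as a quoted string, not a number, so it maps to a Go string.)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the shape of `crictl images --output json`
	// as shown in the log above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // bytes, quoted in the JSON
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		raw := []byte(`{"images":[{"id":"d7b1","repoTags":["registry.k8s.io/pause:3.10.1"],"repoDigests":[],"size":"519884","pinned":true}]}`)
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size, img.Pinned)
		}
	}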
	I1222 00:26:59.467204 1915516 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:26:59.467349 1915516 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
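	(Annotation: the kubeadm.go:947 block above is a rendered systemd drop-in for the kubelet. A hedged sketch of how such a unit could be assembled from the node parameters in the config line; the template text, struct, and a trimmed flag set are hypothetical, but the values mirror the ExecStart line and config logged above.)

	package main

	import (
		"os"
		"text/template"
	)

	// unit is an abbreviated version of the drop-in shown in the log;
	// only a few of the logged kubelet flags are templated here.
	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		err := t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.35.0-rc.1", "functional-837733", "192.168.49.2"})
		if err != nil {
			panic(err)
		}
	}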
	I1222 00:26:59.467469 1915516 ssh_runner.go:195] Run: crio config
	I1222 00:26:59.517634 1915516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1222 00:26:59.517672 1915516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1222 00:26:59.517681 1915516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1222 00:26:59.517684 1915516 command_runner.go:130] > #
	I1222 00:26:59.517692 1915516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1222 00:26:59.517698 1915516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1222 00:26:59.517704 1915516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1222 00:26:59.517712 1915516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1222 00:26:59.517716 1915516 command_runner.go:130] > # reload'.
	I1222 00:26:59.517723 1915516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1222 00:26:59.517729 1915516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1222 00:26:59.517735 1915516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1222 00:26:59.517742 1915516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1222 00:26:59.517745 1915516 command_runner.go:130] > [crio]
	I1222 00:26:59.517751 1915516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1222 00:26:59.517756 1915516 command_runner.go:130] > # containers images, in this directory.
	I1222 00:26:59.517994 1915516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1222 00:26:59.518006 1915516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1222 00:26:59.518014 1915516 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1222 00:26:59.518022 1915516 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1222 00:26:59.518026 1915516 command_runner.go:130] > # imagestore = ""
	I1222 00:26:59.518032 1915516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1222 00:26:59.518043 1915516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1222 00:26:59.518048 1915516 command_runner.go:130] > # storage_driver = "overlay"
	I1222 00:26:59.518053 1915516 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1222 00:26:59.518059 1915516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1222 00:26:59.518209 1915516 command_runner.go:130] > # storage_option = [
	I1222 00:26:59.518216 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.518229 1915516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1222 00:26:59.518237 1915516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1222 00:26:59.518375 1915516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1222 00:26:59.518385 1915516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1222 00:26:59.518392 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1222 00:26:59.518406 1915516 command_runner.go:130] > # always happen on a node reboot
	I1222 00:26:59.518412 1915516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1222 00:26:59.518638 1915516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1222 00:26:59.518654 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1222 00:26:59.518660 1915516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1222 00:26:59.521919 1915516 command_runner.go:130] > # version_file_persist = ""
	I1222 00:26:59.521945 1915516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1222 00:26:59.521986 1915516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1222 00:26:59.521998 1915516 command_runner.go:130] > # internal_wipe = true
	I1222 00:26:59.522012 1915516 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1222 00:26:59.522023 1915516 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1222 00:26:59.522028 1915516 command_runner.go:130] > # internal_repair = true
	I1222 00:26:59.522049 1915516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1222 00:26:59.522075 1915516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1222 00:26:59.522087 1915516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1222 00:26:59.522103 1915516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1222 00:26:59.522130 1915516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1222 00:26:59.522140 1915516 command_runner.go:130] > [crio.api]
	I1222 00:26:59.522156 1915516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1222 00:26:59.522172 1915516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1222 00:26:59.522199 1915516 command_runner.go:130] > # IP address on which the stream server will listen.
	I1222 00:26:59.522210 1915516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1222 00:26:59.522227 1915516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1222 00:26:59.522243 1915516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1222 00:26:59.522251 1915516 command_runner.go:130] > # stream_port = "0"
	I1222 00:26:59.522257 1915516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1222 00:26:59.522262 1915516 command_runner.go:130] > # stream_enable_tls = false
	I1222 00:26:59.522297 1915516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1222 00:26:59.522308 1915516 command_runner.go:130] > # stream_idle_timeout = ""
	I1222 00:26:59.522315 1915516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1222 00:26:59.522330 1915516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522334 1915516 command_runner.go:130] > # stream_tls_cert = ""
	I1222 00:26:59.522352 1915516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1222 00:26:59.522421 1915516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522433 1915516 command_runner.go:130] > # stream_tls_key = ""
	I1222 00:26:59.522440 1915516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1222 00:26:59.522455 1915516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1222 00:26:59.522468 1915516 command_runner.go:130] > # automatically pick up the changes.
	I1222 00:26:59.522477 1915516 command_runner.go:130] > # stream_tls_ca = ""
	I1222 00:26:59.522505 1915516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522519 1915516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1222 00:26:59.522527 1915516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522540 1915516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1222 00:26:59.522547 1915516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1222 00:26:59.522556 1915516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1222 00:26:59.522560 1915516 command_runner.go:130] > [crio.runtime]
	I1222 00:26:59.522574 1915516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1222 00:26:59.522584 1915516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1222 00:26:59.522588 1915516 command_runner.go:130] > # "nofile=1024:2048"
	I1222 00:26:59.522599 1915516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1222 00:26:59.522607 1915516 command_runner.go:130] > # default_ulimits = [
	I1222 00:26:59.522611 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522624 1915516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1222 00:26:59.522631 1915516 command_runner.go:130] > # no_pivot = false
	I1222 00:26:59.522642 1915516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1222 00:26:59.522652 1915516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1222 00:26:59.522657 1915516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1222 00:26:59.522672 1915516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1222 00:26:59.522681 1915516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1222 00:26:59.522689 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522700 1915516 command_runner.go:130] > # conmon = ""
	I1222 00:26:59.522705 1915516 command_runner.go:130] > # Cgroup setting for conmon
	I1222 00:26:59.522717 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1222 00:26:59.522721 1915516 command_runner.go:130] > conmon_cgroup = "pod"
	I1222 00:26:59.522733 1915516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1222 00:26:59.522741 1915516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1222 00:26:59.522760 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522767 1915516 command_runner.go:130] > # conmon_env = [
	I1222 00:26:59.522770 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522776 1915516 command_runner.go:130] > # Additional environment variables to set for all the
	I1222 00:26:59.522786 1915516 command_runner.go:130] > # containers. These are overridden if set in the
	I1222 00:26:59.522796 1915516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1222 00:26:59.522803 1915516 command_runner.go:130] > # default_env = [
	I1222 00:26:59.522807 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522813 1915516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1222 00:26:59.522829 1915516 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1222 00:26:59.522839 1915516 command_runner.go:130] > # selinux = false
	I1222 00:26:59.522846 1915516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1222 00:26:59.522861 1915516 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1222 00:26:59.522870 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522874 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.522886 1915516 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1222 00:26:59.522895 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522900 1915516 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1222 00:26:59.522911 1915516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1222 00:26:59.522926 1915516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1222 00:26:59.522936 1915516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1222 00:26:59.522948 1915516 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1222 00:26:59.522956 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522962 1915516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1222 00:26:59.522976 1915516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1222 00:26:59.522984 1915516 command_runner.go:130] > # the cgroup blockio controller.
	I1222 00:26:59.522992 1915516 command_runner.go:130] > # blockio_config_file = ""
	I1222 00:26:59.522999 1915516 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1222 00:26:59.523010 1915516 command_runner.go:130] > # blockio parameters.
	I1222 00:26:59.523015 1915516 command_runner.go:130] > # blockio_reload = false
	I1222 00:26:59.523027 1915516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1222 00:26:59.523033 1915516 command_runner.go:130] > # irqbalance daemon.
	I1222 00:26:59.523049 1915516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1222 00:26:59.523059 1915516 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1222 00:26:59.523067 1915516 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1222 00:26:59.523081 1915516 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1222 00:26:59.523091 1915516 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1222 00:26:59.523103 1915516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1222 00:26:59.523112 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523116 1915516 command_runner.go:130] > # rdt_config_file = ""
	I1222 00:26:59.523122 1915516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1222 00:26:59.523133 1915516 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1222 00:26:59.523140 1915516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1222 00:26:59.523148 1915516 command_runner.go:130] > # separate_pull_cgroup = ""
	I1222 00:26:59.523159 1915516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1222 00:26:59.523202 1915516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1222 00:26:59.523210 1915516 command_runner.go:130] > # will be added.
	I1222 00:26:59.523215 1915516 command_runner.go:130] > # default_capabilities = [
	I1222 00:26:59.523219 1915516 command_runner.go:130] > # 	"CHOWN",
	I1222 00:26:59.523223 1915516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1222 00:26:59.523380 1915516 command_runner.go:130] > # 	"FSETID",
	I1222 00:26:59.523391 1915516 command_runner.go:130] > # 	"FOWNER",
	I1222 00:26:59.523395 1915516 command_runner.go:130] > # 	"SETGID",
	I1222 00:26:59.523399 1915516 command_runner.go:130] > # 	"SETUID",
	I1222 00:26:59.523427 1915516 command_runner.go:130] > # 	"SETPCAP",
	I1222 00:26:59.523432 1915516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1222 00:26:59.523436 1915516 command_runner.go:130] > # 	"KILL",
	I1222 00:26:59.523439 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523448 1915516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1222 00:26:59.523464 1915516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1222 00:26:59.523470 1915516 command_runner.go:130] > # add_inheritable_capabilities = false
	I1222 00:26:59.523476 1915516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1222 00:26:59.523482 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523486 1915516 command_runner.go:130] > default_sysctls = [
	I1222 00:26:59.523491 1915516 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1222 00:26:59.523494 1915516 command_runner.go:130] > ]
	I1222 00:26:59.523499 1915516 command_runner.go:130] > # List of devices on the host that a
	I1222 00:26:59.523506 1915516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1222 00:26:59.523509 1915516 command_runner.go:130] > # allowed_devices = [
	I1222 00:26:59.523513 1915516 command_runner.go:130] > # 	"/dev/fuse",
	I1222 00:26:59.523517 1915516 command_runner.go:130] > # 	"/dev/net/tun",
	I1222 00:26:59.523520 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523525 1915516 command_runner.go:130] > # List of additional devices. specified as
	I1222 00:26:59.523533 1915516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1222 00:26:59.523538 1915516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1222 00:26:59.523544 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523548 1915516 command_runner.go:130] > # additional_devices = [
	I1222 00:26:59.523551 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523557 1915516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1222 00:26:59.523561 1915516 command_runner.go:130] > # cdi_spec_dirs = [
	I1222 00:26:59.523565 1915516 command_runner.go:130] > # 	"/etc/cdi",
	I1222 00:26:59.523569 1915516 command_runner.go:130] > # 	"/var/run/cdi",
	I1222 00:26:59.523574 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523581 1915516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1222 00:26:59.523588 1915516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1222 00:26:59.523591 1915516 command_runner.go:130] > # Defaults to false.
	I1222 00:26:59.523596 1915516 command_runner.go:130] > # device_ownership_from_security_context = false
	I1222 00:26:59.523603 1915516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1222 00:26:59.523609 1915516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1222 00:26:59.523612 1915516 command_runner.go:130] > # hooks_dir = [
	I1222 00:26:59.523617 1915516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1222 00:26:59.523622 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523628 1915516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1222 00:26:59.523635 1915516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1222 00:26:59.523642 1915516 command_runner.go:130] > # its default mounts from the following two files:
	I1222 00:26:59.523644 1915516 command_runner.go:130] > #
	I1222 00:26:59.523651 1915516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1222 00:26:59.523658 1915516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1222 00:26:59.523663 1915516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1222 00:26:59.523666 1915516 command_runner.go:130] > #
	I1222 00:26:59.523672 1915516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1222 00:26:59.523679 1915516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1222 00:26:59.523686 1915516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1222 00:26:59.523692 1915516 command_runner.go:130] > #      only add mounts it finds in this file.
	I1222 00:26:59.523695 1915516 command_runner.go:130] > #
	I1222 00:26:59.523700 1915516 command_runner.go:130] > # default_mounts_file = ""
	I1222 00:26:59.523706 1915516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1222 00:26:59.523713 1915516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1222 00:26:59.523716 1915516 command_runner.go:130] > # pids_limit = -1
	I1222 00:26:59.523723 1915516 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1222 00:26:59.523729 1915516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1222 00:26:59.523735 1915516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1222 00:26:59.523744 1915516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1222 00:26:59.523748 1915516 command_runner.go:130] > # log_size_max = -1
	I1222 00:26:59.523755 1915516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1222 00:26:59.523759 1915516 command_runner.go:130] > # log_to_journald = false
	I1222 00:26:59.523765 1915516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1222 00:26:59.523770 1915516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1222 00:26:59.523775 1915516 command_runner.go:130] > # Path to directory for container attach sockets.
	I1222 00:26:59.523779 1915516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1222 00:26:59.523784 1915516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1222 00:26:59.523788 1915516 command_runner.go:130] > # bind_mount_prefix = ""
	I1222 00:26:59.523794 1915516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1222 00:26:59.523798 1915516 command_runner.go:130] > # read_only = false
	I1222 00:26:59.523804 1915516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1222 00:26:59.523810 1915516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1222 00:26:59.523815 1915516 command_runner.go:130] > # live configuration reload.
	I1222 00:26:59.523818 1915516 command_runner.go:130] > # log_level = "info"
	I1222 00:26:59.523825 1915516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1222 00:26:59.523830 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523834 1915516 command_runner.go:130] > # log_filter = ""
	I1222 00:26:59.523841 1915516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523847 1915516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1222 00:26:59.523850 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523858 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523862 1915516 command_runner.go:130] > # uid_mappings = ""
	I1222 00:26:59.523868 1915516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523874 1915516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1222 00:26:59.523879 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523887 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523891 1915516 command_runner.go:130] > # gid_mappings = ""
	I1222 00:26:59.523897 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1222 00:26:59.523903 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523909 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523917 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523921 1915516 command_runner.go:130] > # minimum_mappable_uid = -1
	I1222 00:26:59.523928 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1222 00:26:59.523934 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523940 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523948 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523952 1915516 command_runner.go:130] > # minimum_mappable_gid = -1
	I1222 00:26:59.523958 1915516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1222 00:26:59.523964 1915516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1222 00:26:59.523970 1915516 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1222 00:26:59.523975 1915516 command_runner.go:130] > # ctr_stop_timeout = 30
	I1222 00:26:59.523981 1915516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1222 00:26:59.523987 1915516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1222 00:26:59.523992 1915516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1222 00:26:59.523997 1915516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1222 00:26:59.524001 1915516 command_runner.go:130] > # drop_infra_ctr = true
	I1222 00:26:59.524011 1915516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1222 00:26:59.524017 1915516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1222 00:26:59.524025 1915516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1222 00:26:59.524033 1915516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1222 00:26:59.524041 1915516 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1222 00:26:59.524048 1915516 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1222 00:26:59.524053 1915516 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1222 00:26:59.524059 1915516 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1222 00:26:59.524063 1915516 command_runner.go:130] > # shared_cpuset = ""
	I1222 00:26:59.524068 1915516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1222 00:26:59.524074 1915516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1222 00:26:59.524078 1915516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1222 00:26:59.524086 1915516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1222 00:26:59.524090 1915516 command_runner.go:130] > # pinns_path = ""
	I1222 00:26:59.524095 1915516 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1222 00:26:59.524101 1915516 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1222 00:26:59.524106 1915516 command_runner.go:130] > # enable_criu_support = true
	I1222 00:26:59.524111 1915516 command_runner.go:130] > # Enable/disable the generation of the container,
	I1222 00:26:59.524117 1915516 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1222 00:26:59.524121 1915516 command_runner.go:130] > # enable_pod_events = false
	I1222 00:26:59.524128 1915516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1222 00:26:59.524133 1915516 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1222 00:26:59.524137 1915516 command_runner.go:130] > # default_runtime = "crun"
	I1222 00:26:59.524142 1915516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1222 00:26:59.524150 1915516 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1222 00:26:59.524159 1915516 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1222 00:26:59.524164 1915516 command_runner.go:130] > # creation as a file is not desired either.
	I1222 00:26:59.524173 1915516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1222 00:26:59.524178 1915516 command_runner.go:130] > # the hostname is being managed dynamically.
	I1222 00:26:59.524182 1915516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1222 00:26:59.524185 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.524191 1915516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1222 00:26:59.524198 1915516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1222 00:26:59.524204 1915516 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1222 00:26:59.524209 1915516 command_runner.go:130] > # Each entry in the table should follow the format:
	I1222 00:26:59.524212 1915516 command_runner.go:130] > #
	I1222 00:26:59.524217 1915516 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1222 00:26:59.524221 1915516 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1222 00:26:59.524226 1915516 command_runner.go:130] > # runtime_type = "oci"
	I1222 00:26:59.524230 1915516 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1222 00:26:59.524236 1915516 command_runner.go:130] > # inherit_default_runtime = false
	I1222 00:26:59.524246 1915516 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1222 00:26:59.524250 1915516 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1222 00:26:59.524255 1915516 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1222 00:26:59.524259 1915516 command_runner.go:130] > # monitor_env = []
	I1222 00:26:59.524264 1915516 command_runner.go:130] > # privileged_without_host_devices = false
	I1222 00:26:59.524268 1915516 command_runner.go:130] > # allowed_annotations = []
	I1222 00:26:59.524274 1915516 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1222 00:26:59.524277 1915516 command_runner.go:130] > # no_sync_log = false
	I1222 00:26:59.524281 1915516 command_runner.go:130] > # default_annotations = {}
	I1222 00:26:59.524285 1915516 command_runner.go:130] > # stream_websockets = false
	I1222 00:26:59.524289 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.524337 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.524343 1915516 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1222 00:26:59.524349 1915516 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1222 00:26:59.524356 1915516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1222 00:26:59.524362 1915516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1222 00:26:59.524365 1915516 command_runner.go:130] > #   in $PATH.
	I1222 00:26:59.524371 1915516 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1222 00:26:59.524376 1915516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1222 00:26:59.524382 1915516 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1222 00:26:59.524386 1915516 command_runner.go:130] > #   state.
	I1222 00:26:59.524392 1915516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1222 00:26:59.524399 1915516 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1222 00:26:59.524405 1915516 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1222 00:26:59.524411 1915516 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1222 00:26:59.524417 1915516 command_runner.go:130] > #   the values from the default runtime on load time.
	I1222 00:26:59.524423 1915516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1222 00:26:59.524429 1915516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1222 00:26:59.524435 1915516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1222 00:26:59.524441 1915516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1222 00:26:59.524446 1915516 command_runner.go:130] > #   The currently recognized values are:
	I1222 00:26:59.524453 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1222 00:26:59.524460 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1222 00:26:59.524466 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1222 00:26:59.524472 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1222 00:26:59.524479 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1222 00:26:59.524489 1915516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1222 00:26:59.524496 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1222 00:26:59.524502 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1222 00:26:59.524508 1915516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1222 00:26:59.524514 1915516 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1222 00:26:59.524521 1915516 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1222 00:26:59.524528 1915516 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1222 00:26:59.524534 1915516 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1222 00:26:59.524540 1915516 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1222 00:26:59.524546 1915516 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1222 00:26:59.524554 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1222 00:26:59.524560 1915516 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1222 00:26:59.524564 1915516 command_runner.go:130] > #   deprecated option "conmon".
	I1222 00:26:59.524571 1915516 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1222 00:26:59.524576 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1222 00:26:59.524583 1915516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1222 00:26:59.524587 1915516 command_runner.go:130] > #   should be moved to the container's cgroup
	I1222 00:26:59.524594 1915516 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1222 00:26:59.524599 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1222 00:26:59.524606 1915516 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1222 00:26:59.524610 1915516 command_runner.go:130] > #   conmon-rs by using:
	I1222 00:26:59.524617 1915516 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1222 00:26:59.524625 1915516 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1222 00:26:59.524633 1915516 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1222 00:26:59.524640 1915516 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1222 00:26:59.524645 1915516 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1222 00:26:59.524652 1915516 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1222 00:26:59.524660 1915516 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1222 00:26:59.524665 1915516 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1222 00:26:59.524673 1915516 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1222 00:26:59.524683 1915516 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1222 00:26:59.524688 1915516 command_runner.go:130] > #   when a machine crash happens.
	I1222 00:26:59.524695 1915516 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1222 00:26:59.524703 1915516 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1222 00:26:59.524711 1915516 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1222 00:26:59.524715 1915516 command_runner.go:130] > #   seccomp profile for the runtime.
	I1222 00:26:59.524721 1915516 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1222 00:26:59.524729 1915516 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1222 00:26:59.524732 1915516 command_runner.go:130] > #
	I1222 00:26:59.524737 1915516 command_runner.go:130] > # Using the seccomp notifier feature:
	I1222 00:26:59.524739 1915516 command_runner.go:130] > #
	I1222 00:26:59.524745 1915516 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1222 00:26:59.524752 1915516 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1222 00:26:59.524755 1915516 command_runner.go:130] > #
	I1222 00:26:59.524761 1915516 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1222 00:26:59.524768 1915516 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1222 00:26:59.524770 1915516 command_runner.go:130] > #
	I1222 00:26:59.524776 1915516 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1222 00:26:59.524780 1915516 command_runner.go:130] > # feature.
	I1222 00:26:59.524783 1915516 command_runner.go:130] > #
	I1222 00:26:59.524789 1915516 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1222 00:26:59.524795 1915516 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1222 00:26:59.524801 1915516 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1222 00:26:59.524807 1915516 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1222 00:26:59.524813 1915516 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1222 00:26:59.524815 1915516 command_runner.go:130] > #
	I1222 00:26:59.524834 1915516 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1222 00:26:59.524841 1915516 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1222 00:26:59.524844 1915516 command_runner.go:130] > #
	I1222 00:26:59.524855 1915516 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1222 00:26:59.524861 1915516 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1222 00:26:59.524865 1915516 command_runner.go:130] > #
	I1222 00:26:59.524871 1915516 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1222 00:26:59.524877 1915516 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1222 00:26:59.524880 1915516 command_runner.go:130] > # limitation.
	I1222 00:26:59.524884 1915516 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1222 00:26:59.524890 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1222 00:26:59.524894 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524900 1915516 command_runner.go:130] > runtime_root = "/run/crun"
	I1222 00:26:59.524904 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524908 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524912 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524917 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524921 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524924 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524931 1915516 command_runner.go:130] > allowed_annotations = [
	I1222 00:26:59.524936 1915516 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1222 00:26:59.524939 1915516 command_runner.go:130] > ]
	I1222 00:26:59.524944 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524948 1915516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1222 00:26:59.524952 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1222 00:26:59.524955 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524959 1915516 command_runner.go:130] > runtime_root = "/run/runc"
	I1222 00:26:59.524964 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524967 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524972 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524976 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524980 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524984 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524988 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524995 1915516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1222 00:26:59.525000 1915516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1222 00:26:59.525007 1915516 command_runner.go:130] > # Note that the behavior of this table is EXPERIMENTAL and may change at any time.
	I1222 00:26:59.525015 1915516 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1222 00:26:59.525027 1915516 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1222 00:26:59.525037 1915516 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1222 00:26:59.525044 1915516 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1222 00:26:59.525050 1915516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1222 00:26:59.525059 1915516 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1222 00:26:59.525068 1915516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1222 00:26:59.525074 1915516 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1222 00:26:59.525081 1915516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1222 00:26:59.525085 1915516 command_runner.go:130] > # Example:
	I1222 00:26:59.525089 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1222 00:26:59.525095 1915516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1222 00:26:59.525100 1915516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1222 00:26:59.525108 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1222 00:26:59.525111 1915516 command_runner.go:130] > # cpuset = "0-1"
	I1222 00:26:59.525115 1915516 command_runner.go:130] > # cpushares = "5"
	I1222 00:26:59.525119 1915516 command_runner.go:130] > # cpuquota = "1000"
	I1222 00:26:59.525122 1915516 command_runner.go:130] > # cpuperiod = "100000"
	I1222 00:26:59.525126 1915516 command_runner.go:130] > # cpulimit = "35"
	I1222 00:26:59.525129 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.525133 1915516 command_runner.go:130] > # The workload name is workload-type.
	I1222 00:26:59.525141 1915516 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1222 00:26:59.525147 1915516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1222 00:26:59.525153 1915516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1222 00:26:59.525161 1915516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1222 00:26:59.525166 1915516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
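Restated as pod metadata, the example above boils down to two annotations. A sketch in Go using the hypothetical container name "my-ctr" (the key shapes follow the example lines verbatim; the values are illustrative):

package main

import "fmt"

// workloadAnnotations returns the annotations a pod would carry to opt into
// the example "workload-type" workload and override cpushares for one
// container named "my-ctr".
func workloadAnnotations() map[string]string {
	return map[string]string{
		"io.crio/workload":             "", // activation annotation; the value is ignored
		"io.crio.workload-type/my-ctr": `{"cpushares": "5"}`,
	}
}

func main() { fmt.Println(workloadAnnotations()) }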
	I1222 00:26:59.525171 1915516 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1222 00:26:59.525178 1915516 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1222 00:26:59.525182 1915516 command_runner.go:130] > # Default value is set to true
	I1222 00:26:59.525187 1915516 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1222 00:26:59.525192 1915516 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1222 00:26:59.525197 1915516 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1222 00:26:59.525201 1915516 command_runner.go:130] > # Default value is set to 'false'
	I1222 00:26:59.525205 1915516 command_runner.go:130] > # disable_hostport_mapping = false
	I1222 00:26:59.525211 1915516 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1222 00:26:59.525219 1915516 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1222 00:26:59.525222 1915516 command_runner.go:130] > # timezone = ""
	I1222 00:26:59.525229 1915516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1222 00:26:59.525231 1915516 command_runner.go:130] > #
	I1222 00:26:59.525238 1915516 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1222 00:26:59.525244 1915516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1222 00:26:59.525248 1915516 command_runner.go:130] > [crio.image]
	I1222 00:26:59.525254 1915516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1222 00:26:59.525258 1915516 command_runner.go:130] > # default_transport = "docker://"
	I1222 00:26:59.525363 1915516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1222 00:26:59.525377 1915516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525381 1915516 command_runner.go:130] > # global_auth_file = ""
	I1222 00:26:59.525392 1915516 command_runner.go:130] > # The image used to instantiate infra containers.
	I1222 00:26:59.525398 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525403 1915516 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.525414 1915516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1222 00:26:59.525425 1915516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525432 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525436 1915516 command_runner.go:130] > # pause_image_auth_file = ""
	I1222 00:26:59.525442 1915516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1222 00:26:59.525450 1915516 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1222 00:26:59.525460 1915516 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1222 00:26:59.525466 1915516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1222 00:26:59.525474 1915516 command_runner.go:130] > # pause_command = "/pause"
	I1222 00:26:59.525480 1915516 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1222 00:26:59.525491 1915516 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1222 00:26:59.525499 1915516 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1222 00:26:59.525509 1915516 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1222 00:26:59.525515 1915516 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1222 00:26:59.525521 1915516 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1222 00:26:59.525527 1915516 command_runner.go:130] > # pinned_images = [
	I1222 00:26:59.525530 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525536 1915516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1222 00:26:59.525545 1915516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1222 00:26:59.525557 1915516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1222 00:26:59.525563 1915516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1222 00:26:59.525572 1915516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1222 00:26:59.525578 1915516 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1222 00:26:59.525584 1915516 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1222 00:26:59.525594 1915516 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1222 00:26:59.525601 1915516 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1222 00:26:59.525608 1915516 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1222 00:26:59.525616 1915516 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1222 00:26:59.525621 1915516 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
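The fallback order just described (namespaced policy, then signature_policy, then the system-wide default) can be written out as a small helper. A sketch only, not CRI-O's actual implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvePolicy picks the effective policy file: <policyDir>/<namespace>.json
// if it exists, else the configured signature_policy, else the system-wide
// /etc/containers/policy.json.
func resolvePolicy(policyDir, namespace, signaturePolicy string) string {
	if namespace != "" {
		p := filepath.Join(policyDir, namespace+".json")
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	if signaturePolicy != "" {
		return signaturePolicy
	}
	return "/etc/containers/policy.json"
}

func main() {
	fmt.Println(resolvePolicy("/etc/crio/policies", "kube-system", "/etc/crio/policy.json"))
}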
	I1222 00:26:59.525627 1915516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1222 00:26:59.525634 1915516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1222 00:26:59.525642 1915516 command_runner.go:130] > # changing them here.
	I1222 00:26:59.525648 1915516 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1222 00:26:59.525652 1915516 command_runner.go:130] > # insecure_registries = [
	I1222 00:26:59.525659 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525666 1915516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1222 00:26:59.525676 1915516 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1222 00:26:59.525684 1915516 command_runner.go:130] > # image_volumes = "mkdir"
	I1222 00:26:59.525692 1915516 command_runner.go:130] > # Temporary directory to use for storing big files
	I1222 00:26:59.525697 1915516 command_runner.go:130] > # big_files_temporary_dir = ""
	I1222 00:26:59.525703 1915516 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1222 00:26:59.525711 1915516 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1222 00:26:59.525717 1915516 command_runner.go:130] > # auto_reload_registries = false
	I1222 00:26:59.525724 1915516 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1222 00:26:59.525734 1915516 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1222 00:26:59.525744 1915516 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1222 00:26:59.525748 1915516 command_runner.go:130] > # pull_progress_timeout = "0s"
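The interval rule above is simple arithmetic; a sketch (not CRI-O code) of the described behavior:

package main

import (
	"fmt"
	"time"
)

// progressInterval applies the rule above: the progress interval is
// pull_progress_timeout / 10, and a zero timeout disables both the timeout
// and the progress output.
func progressInterval(pullProgressTimeout time.Duration) (time.Duration, bool) {
	if pullProgressTimeout == 0 {
		return 0, false
	}
	return pullProgressTimeout / 10, true
}

func main() {
	iv, ok := progressInterval(30 * time.Second)
	fmt.Println(iv, ok) // 3s true
}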
	I1222 00:26:59.525770 1915516 command_runner.go:130] > # The mode of short name resolution.
	I1222 00:26:59.525780 1915516 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1222 00:26:59.525788 1915516 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1222 00:26:59.525794 1915516 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1222 00:26:59.525800 1915516 command_runner.go:130] > # short_name_mode = "enforcing"
	I1222 00:26:59.525807 1915516 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1222 00:26:59.525813 1915516 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1222 00:26:59.525821 1915516 command_runner.go:130] > # oci_artifact_mount_support = true
	I1222 00:26:59.525829 1915516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1222 00:26:59.525836 1915516 command_runner.go:130] > # CNI plugins.
	I1222 00:26:59.525843 1915516 command_runner.go:130] > [crio.network]
	I1222 00:26:59.525850 1915516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1222 00:26:59.525859 1915516 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1222 00:26:59.525863 1915516 command_runner.go:130] > # cni_default_network = ""
	I1222 00:26:59.525869 1915516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1222 00:26:59.525877 1915516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1222 00:26:59.525883 1915516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1222 00:26:59.525887 1915516 command_runner.go:130] > # plugin_dirs = [
	I1222 00:26:59.525891 1915516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1222 00:26:59.525897 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525903 1915516 command_runner.go:130] > # List of included pod metrics.
	I1222 00:26:59.525908 1915516 command_runner.go:130] > # included_pod_metrics = [
	I1222 00:26:59.525913 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525920 1915516 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1222 00:26:59.525926 1915516 command_runner.go:130] > [crio.metrics]
	I1222 00:26:59.525932 1915516 command_runner.go:130] > # Globally enable or disable metrics support.
	I1222 00:26:59.525936 1915516 command_runner.go:130] > # enable_metrics = false
	I1222 00:26:59.525945 1915516 command_runner.go:130] > # Specify enabled metrics collectors.
	I1222 00:26:59.525950 1915516 command_runner.go:130] > # Per default all metrics are enabled.
	I1222 00:26:59.525957 1915516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1222 00:26:59.525969 1915516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1222 00:26:59.525976 1915516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1222 00:26:59.525981 1915516 command_runner.go:130] > # metrics_collectors = [
	I1222 00:26:59.525987 1915516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1222 00:26:59.525992 1915516 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1222 00:26:59.525998 1915516 command_runner.go:130] > # 	"containers_oom_total",
	I1222 00:26:59.526006 1915516 command_runner.go:130] > # 	"processes_defunct",
	I1222 00:26:59.526010 1915516 command_runner.go:130] > # 	"operations_total",
	I1222 00:26:59.526014 1915516 command_runner.go:130] > # 	"operations_latency_seconds",
	I1222 00:26:59.526019 1915516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1222 00:26:59.526023 1915516 command_runner.go:130] > # 	"operations_errors_total",
	I1222 00:26:59.526028 1915516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1222 00:26:59.526033 1915516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1222 00:26:59.526037 1915516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1222 00:26:59.526041 1915516 command_runner.go:130] > # 	"image_pulls_success_total",
	I1222 00:26:59.526045 1915516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1222 00:26:59.526050 1915516 command_runner.go:130] > # 	"containers_oom_count_total",
	I1222 00:26:59.526055 1915516 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1222 00:26:59.526059 1915516 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1222 00:26:59.526064 1915516 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1222 00:26:59.526066 1915516 command_runner.go:130] > # ]
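The prefix equivalence described before this list ("operations" == "crio_operations" == "container_runtime_crio_operations") amounts to stripping optional prefixes. A sketch under that reading, not CRI-O's code:

package main

import (
	"fmt"
	"strings"
)

// canonicalCollector strips the optional prefixes so that all three
// spellings of a collector name compare equal.
func canonicalCollector(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	return strings.TrimPrefix(name, "crio_")
}

func main() {
	fmt.Println(canonicalCollector("container_runtime_crio_operations")) // operations
	fmt.Println(canonicalCollector("crio_operations"))                   // operations
}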
	I1222 00:26:59.526080 1915516 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1222 00:26:59.526084 1915516 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1222 00:26:59.526089 1915516 command_runner.go:130] > # The port on which the metrics server will listen.
	I1222 00:26:59.526093 1915516 command_runner.go:130] > # metrics_port = 9090
	I1222 00:26:59.526098 1915516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1222 00:26:59.526102 1915516 command_runner.go:130] > # metrics_socket = ""
	I1222 00:26:59.526107 1915516 command_runner.go:130] > # The certificate for the secure metrics server.
	I1222 00:26:59.526118 1915516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1222 00:26:59.526125 1915516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1222 00:26:59.526130 1915516 command_runner.go:130] > # certificate on any modification event.
	I1222 00:26:59.526134 1915516 command_runner.go:130] > # metrics_cert = ""
	I1222 00:26:59.526139 1915516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1222 00:26:59.526148 1915516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1222 00:26:59.526151 1915516 command_runner.go:130] > # metrics_key = ""
	I1222 00:26:59.526157 1915516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1222 00:26:59.526163 1915516 command_runner.go:130] > [crio.tracing]
	I1222 00:26:59.526169 1915516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1222 00:26:59.526177 1915516 command_runner.go:130] > # enable_tracing = false
	I1222 00:26:59.526182 1915516 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1222 00:26:59.526187 1915516 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1222 00:26:59.526197 1915516 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1222 00:26:59.526205 1915516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1222 00:26:59.526213 1915516 command_runner.go:130] > # CRI-O NRI configuration.
	I1222 00:26:59.526218 1915516 command_runner.go:130] > [crio.nri]
	I1222 00:26:59.526222 1915516 command_runner.go:130] > # Globally enable or disable NRI.
	I1222 00:26:59.526226 1915516 command_runner.go:130] > # enable_nri = true
	I1222 00:26:59.526230 1915516 command_runner.go:130] > # NRI socket to listen on.
	I1222 00:26:59.526238 1915516 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1222 00:26:59.526242 1915516 command_runner.go:130] > # NRI plugin directory to use.
	I1222 00:26:59.526247 1915516 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1222 00:26:59.526255 1915516 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1222 00:26:59.526264 1915516 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1222 00:26:59.526270 1915516 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1222 00:26:59.526331 1915516 command_runner.go:130] > # nri_disable_connections = false
	I1222 00:26:59.526342 1915516 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1222 00:26:59.526347 1915516 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1222 00:26:59.526352 1915516 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1222 00:26:59.526359 1915516 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1222 00:26:59.526364 1915516 command_runner.go:130] > # NRI default validator configuration.
	I1222 00:26:59.526374 1915516 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1222 00:26:59.526381 1915516 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1222 00:26:59.526389 1915516 command_runner.go:130] > # can be restricted/rejected:
	I1222 00:26:59.526393 1915516 command_runner.go:130] > # - OCI hook injection
	I1222 00:26:59.526399 1915516 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1222 00:26:59.526406 1915516 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1222 00:26:59.526411 1915516 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1222 00:26:59.526416 1915516 command_runner.go:130] > # - adjustment of linux namespaces
	I1222 00:26:59.526425 1915516 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1222 00:26:59.526435 1915516 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1222 00:26:59.526442 1915516 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1222 00:26:59.526448 1915516 command_runner.go:130] > #
	I1222 00:26:59.526452 1915516 command_runner.go:130] > # [crio.nri.default_validator]
	I1222 00:26:59.526457 1915516 command_runner.go:130] > # nri_enable_default_validator = false
	I1222 00:26:59.526467 1915516 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1222 00:26:59.526474 1915516 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1222 00:26:59.526485 1915516 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1222 00:26:59.526492 1915516 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1222 00:26:59.526500 1915516 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1222 00:26:59.526505 1915516 command_runner.go:130] > # nri_validator_required_plugins = [
	I1222 00:26:59.526510 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526516 1915516 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1222 00:26:59.526525 1915516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1222 00:26:59.526530 1915516 command_runner.go:130] > [crio.stats]
	I1222 00:26:59.526543 1915516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1222 00:26:59.526549 1915516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1222 00:26:59.526556 1915516 command_runner.go:130] > # stats_collection_period = 0
	I1222 00:26:59.526563 1915516 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1222 00:26:59.526573 1915516 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1222 00:26:59.526577 1915516 command_runner.go:130] > # collection_period = 0
	I1222 00:26:59.526772 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.4923315Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1222 00:26:59.526793 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492368561Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1222 00:26:59.526804 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492394456Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1222 00:26:59.526811 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492418381Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1222 00:26:59.526829 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.49249673Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:59.526841 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492972301Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1222 00:26:59.526852 1915516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1222 00:26:59.526929 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:59.526945 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:59.526969 1915516 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:26:59.526995 1915516 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:26:59.527121 1915516 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
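The generated kubeadm config above is four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only sketch that splits such a config and lists the kinds, for inspection (a real consumer would unmarshal each document properly):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kinds splits a multi-document YAML string on "---" separators and reports
// each document's kind.
func kinds(multiDoc string) []string {
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}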
	
	I1222 00:26:59.527200 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:26:59.535433 1915516 command_runner.go:130] > kubeadm
	I1222 00:26:59.535512 1915516 command_runner.go:130] > kubectl
	I1222 00:26:59.535531 1915516 command_runner.go:130] > kubelet
	I1222 00:26:59.535576 1915516 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:26:59.535650 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:26:59.543039 1915516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:26:59.555537 1915516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:26:59.568459 1915516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 00:26:59.580757 1915516 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:26:59.584332 1915516 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:26:59.584456 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.724165 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:00.346235 1915516 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:27:00.346307 1915516 certs.go:195] generating shared ca certs ...
	I1222 00:27:00.346339 1915516 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:00.346533 1915516 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:27:00.346648 1915516 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:27:00.346710 1915516 certs.go:257] generating profile certs ...
	I1222 00:27:00.346876 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:27:00.347136 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:27:00.347262 1915516 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:27:00.347310 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:27:00.347347 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:27:00.347391 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:27:00.347455 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:27:00.347514 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:27:00.347563 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:27:00.347599 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:27:00.347655 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:27:00.347758 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:27:00.347824 1915516 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:27:00.347874 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:27:00.347933 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:27:00.347992 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:27:00.348063 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:27:00.348147 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:27:00.348210 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem -> /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.348254 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.348294 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.349116 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:27:00.373997 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:27:00.395587 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:27:00.415350 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:27:00.433576 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:27:00.451786 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:27:00.469999 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:27:00.488347 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:27:00.506732 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:27:00.524546 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:27:00.542404 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:27:00.559977 1915516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:27:00.572924 1915516 ssh_runner.go:195] Run: openssl version
	I1222 00:27:00.579133 1915516 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:27:00.579568 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.587905 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:27:00.596398 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600098 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600361 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600432 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.641247 1915516 command_runner.go:130] > 51391683
	I1222 00:27:00.641737 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:27:00.649412 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.657021 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:27:00.664905 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668819 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668897 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668950 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.710718 1915516 command_runner.go:130] > 3ec20f2e
	I1222 00:27:00.711212 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:27:00.718755 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.725802 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:27:00.733002 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736446 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736472 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736534 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.776933 1915516 command_runner.go:130] > b5213941
	I1222 00:27:00.777458 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
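The repeated test/ln/hash/test sequence above installs each CA under its OpenSSL subject hash (e.g. minikubeCA hashes to b5213941, exposed as b5213941.0). A sketch of the same dance in Go, assuming the openssl binary is on PATH as it is inside the minikube node:

package cacerts

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the subject hash of certPath via openssl and exposes
// the cert as /etc/ssl/certs/<hash>.0, replacing any existing link
// (the `ln -fs` behavior in the log).
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}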
	I1222 00:27:00.784943 1915516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788657 1915516 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788682 1915516 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:27:00.788688 1915516 command_runner.go:130] > Device: 259,1	Inode: 1585688     Links: 1
	I1222 00:27:00.788695 1915516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:27:00.788729 1915516 command_runner.go:130] > Access: 2025-12-22 00:22:53.052243967 +0000
	I1222 00:27:00.788738 1915516 command_runner.go:130] > Modify: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788744 1915516 command_runner.go:130] > Change: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788752 1915516 command_runner.go:130] >  Birth: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788843 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:27:00.834300 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.834770 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:27:00.875032 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.875494 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:27:00.916072 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.916544 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:27:00.957168 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.957608 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:27:01.005250 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:01.005855 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:27:01.046783 1915516 command_runner.go:130] > Certificate will not expire
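Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent check in Go's standard library (a sketch, reading the cert from PEM bytes):

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate expires within
// the next d (86400s in the log), i.e. what `-checkend` tests.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}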
	I1222 00:27:01.047595 1915516 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:27:01.047694 1915516 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:27:01.047794 1915516 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:27:01.074422 1915516 cri.go:96] found id: ""
	I1222 00:27:01.074566 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:27:01.081706 1915516 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:27:01.081729 1915516 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:27:01.081736 1915516 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:27:01.082711 1915516 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:27:01.082752 1915516 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:27:01.082809 1915516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:27:01.090505 1915516 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:27:01.090899 1915516 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091003 1915516 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "functional-837733" cluster setting kubeconfig missing "functional-837733" context setting]
	I1222 00:27:01.091279 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.091694 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091853 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.092420 1915516 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:27:01.092441 1915516 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:27:01.092469 1915516 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:27:01.092508 1915516 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:27:01.092519 1915516 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:27:01.092524 1915516 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:27:01.092821 1915516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:27:01.103318 1915516 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:27:01.103353 1915516 kubeadm.go:602] duration metric: took 20.58853ms to restartPrimaryControlPlane
	I1222 00:27:01.103366 1915516 kubeadm.go:403] duration metric: took 55.780111ms to StartCluster
	I1222 00:27:01.103381 1915516 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.103449 1915516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.104102 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.104313 1915516 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:27:01.104706 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:27:01.104752 1915516 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:27:01.104888 1915516 addons.go:70] Setting storage-provisioner=true in profile "functional-837733"
	I1222 00:27:01.104908 1915516 addons.go:239] Setting addon storage-provisioner=true in "functional-837733"
	I1222 00:27:01.104931 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.105709 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.105852 1915516 addons.go:70] Setting default-storageclass=true in profile "functional-837733"
	I1222 00:27:01.105876 1915516 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-837733"
	I1222 00:27:01.106124 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.111206 1915516 out.go:179] * Verifying Kubernetes components...
	I1222 00:27:01.114015 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:27:01.149988 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.150406 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.150740 1915516 addons.go:239] Setting addon default-storageclass=true in "functional-837733"
	I1222 00:27:01.150781 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.151633 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.160017 1915516 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:27:01.163813 1915516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.163840 1915516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:27:01.163933 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.208933 1915516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:01.208958 1915516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:27:01.209032 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.220299 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.259895 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.324483 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:01.352138 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.392769 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.095697 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.095745 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095787 1915516 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
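
	[editor's note] The two applies above fail for the same reason: `kubectl apply` validates manifests against the cluster's OpenAPI schema (`GET /openapi/v2`), and the apiserver on port 8441 is refusing connections, so even a well-formed manifest is rejected client-side. minikube logs the failure and schedules a retry, here after 100ms. A minimal sketch of that apply-and-retry pattern, assuming a plain local `kubectl` invocation with a doubling delay; the actual ssh_runner/retry.go internals are not shown in this log and may differ:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the loop visible in the log: run `kubectl apply`,
// and on failure wait with a growing delay before trying again. The initial
// 100ms matches the first retry above; the doubling factor is an assumption.
func applyWithRetry(manifest string, attempts int) error {
	delay := 100 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %w: %s", err, out)
		time.Sleep(delay)
		delay *= 2 // back off between attempts instead of hammering the apiserver
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```
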
	I1222 00:27:02.095683 1915516 node_ready.go:35] waiting up to 6m0s for node "functional-837733" to be "Ready" ...
	I1222 00:27:02.095893 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.095958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.096162 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.096189 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.096261 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.229628 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.250171 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.299403 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.303146 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.325073 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.325129 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.596516 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.596602 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.596960 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.700363 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.757664 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.757704 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.778883 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.845712 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.845792 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.096069 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.096495 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.596210 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.596328 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.596586 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.599774 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:03.608220 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:03.704291 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.704413 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.724087 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.724140 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.096645 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.096746 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.097060 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.097110 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
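
	[editor's note] Interleaved with the addon retries, node_ready.go polls `GET /api/v1/nodes/functional-837733` roughly every 500ms, for up to 6m0s, waiting for the node's Ready condition; while the apiserver is down every request ends in the "connection refused" warning above and the loop simply continues. A minimal client-go sketch of such a readiness poll, assuming a standard kubeconfig path (this is not minikube's actual node_ready.go code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node reports
// Ready, mirroring the repeated GET /api/v1/nodes/<name> entries in
// the log. Errors (e.g. connection refused) are swallowed and retried.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval seen in the log
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-837733", 6*time.Minute))
}
```
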
	I1222 00:27:04.395547 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:04.461482 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.461526 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.596939 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.597016 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.597354 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.908718 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:04.967177 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.967217 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:05.096456 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.096540 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.096895 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.596626 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.596711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.096915 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.096990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:06.097365 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:06.122623 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:06.180609 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.181800 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 00:27:06.185073 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.244148 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:06.244195 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.596862 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.596990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.597266 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.095976 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.096054 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.096380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.596061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.596391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.650649 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:07.682081 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:07.723708 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.723755 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:07.756657 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.756702 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:08.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.096407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.596459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:08.596512 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:09.096039 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.500982 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:09.557414 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:09.557460 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:09.596618 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.596687 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.596949 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.096786 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.096870 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.407596 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:10.466808 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:10.466847 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:10.596231 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.596316 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:10.596666 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:11.096583 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.096658 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.596951 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.597033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.597361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.095930 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.096006 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.096359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.596103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.596346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.096064 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.096469 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:13.110706 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:13.162639 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:13.165994 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:13.596604 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.597025 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.096902 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.097221 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.596331 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.007048 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:15.075847 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:15.075897 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:15.095981 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.096063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.596111 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.596199 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.596504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.096151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.096481 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.596451 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.596873 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.096633 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.096711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.097034 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.256498 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:17.320974 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:17.324801 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:17.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.596544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:17.596599 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.096285 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.096359 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.096666 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.596351 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.596417 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.596709 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.596186 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.596288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.596608 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:19.596658 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:20.095993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.596149 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.096326 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.096409 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.096724 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.596667 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.596736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.597043 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:21.597093 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:22.096478 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.096951 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.596629 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.596721 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.597082 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.096795 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.096876 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.097130 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.596917 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.596995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.597294 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.941866 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:23.998448 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:23.998497 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:23.998521 1915516 retry.go:84] will retry after 7.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:24.096775 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.096858 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.097179 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.596921 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.596997 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.597304 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.096528 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.596241 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.596313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.596669 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.096357 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.096423 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.096702 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:26.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.596861 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.597278 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.596062 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.596394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.096121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.096413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.596136 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.596210 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.596520 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:28.596577 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:29.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.096085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.096330 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.596121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.096205 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.096297 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.096674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.321067 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:30.379433 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:30.379480 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.379505 1915516 retry.go:84] will retry after 9.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
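
	[editor's note] The retry delays grow across the run: the first retry fired after 100ms, the storage-provisioner apply above was rescheduled after 7.2s, and this storageclass apply after 9.7s, consistent with a jittered exponential backoff. The exact retry.go parameters are not visible in this log; the factor and jitter below are illustrative assumptions only:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffSchedule prints a jittered exponential schedule of the kind
// consistent with the delays observed here (0.1s, 7.2s, 9.7s).
func backoffSchedule(initial time.Duration, factor float64, steps int) []time.Duration {
	out := make([]time.Duration, 0, steps)
	d := initial
	for i := 0; i < steps; i++ {
		jitter := time.Duration(rand.Int63n(int64(d) / 2)) // up to +50% jitter
		out = append(out, d+jitter)
		d = time.Duration(float64(d) * factor)
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(100*time.Millisecond, 2.0, 8) {
		fmt.Printf("attempt %d: wait %v\n", i+1, d)
	}
}
```
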
	I1222 00:27:30.596804 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.596878 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.597200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.597254 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:31.096701 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.096780 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.097109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.231597 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:31.293911 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:31.293951 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 repeated every ~500ms, 00:27:31.5 through 00:27:40.0, every attempt ending in "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready "will retry" warnings at 00:27:32.5, 00:27:35.0, 00:27:37.0 and 00:27:39.5 elided ...]
	I1222 00:27:40.098739 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:40.155643 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:40.159357 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.159395 1915516 retry.go:84] will retry after 15s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
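The retry delays logged in this stretch (15s here, then 28.2s, 39.2s and 44.3s below) lengthen with each failure, the signature of a jittered exponential backoff behind retry.go. A generic sketch of that retry shape (illustrative only, not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponentially growing, jittered delays,
// mirroring the "will retry after Ns" lines in the log above.
func retryExpo(fn func() error, base time.Duration, attempts int) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", jittered.Round(100*time.Millisecond), err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	// base kept short for the demo; the log's first delay was ~15s
	err := retryExpo(func() error {
		return errors.New("connect: connection refused") // stand-in for the failing kubectl apply
	}, time.Second, 4)
	fmt.Println("giving up:", err)
}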
	[... identical polls every ~500ms, 00:27:40.5 through 00:27:46.0, all refused; node_ready warnings at 00:27:41.5, 00:27:44.0 and 00:27:46.0 elided ...]
	I1222 00:27:46.458265 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:46.518675 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:46.518729 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.518755 1915516 retry.go:84] will retry after 28.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical polls every ~500ms, 00:27:46.5 through 00:27:55.0, all refused; node_ready warnings at 00:27:48.0, 00:27:50.5, 00:27:53.0 and 00:27:55.0 elided ...]
	I1222 00:27:55.200777 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:55.265479 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:55.265520 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:55.265546 1915516 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical polls every ~500ms, 00:27:55.5 through 00:28:14.5, all refused; node_ready warnings every 2 to 2.5s (00:27:57.0 through 00:28:13.5) elided ...]
	I1222 00:28:14.678913 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:14.737274 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:14.740868 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:28:14.740903 1915516 retry.go:84] will retry after 44.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical polls every ~500ms, 00:28:15.0 through 00:28:24.5, all refused; node_ready warnings at 00:28:16.0, 00:28:18.0, 00:28:20.5 and 00:28:22.5 elided ...]
	W1222 00:28:24.596589 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:25.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:25.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:25.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:25.596179 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:25.596271 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:25.596578 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:26.096014 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:26.096092 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:26.096355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:26.596544 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:26.596874 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:26.596933 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:27.096632 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:27.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:27.097066 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:27.596687 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:27.596753 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:27.597008 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:28.096768 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:28.096839 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:28.097237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:28.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:28.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:28.596433 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:29.095990 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:29.096076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:29.096374 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:29.096425 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:29.596121 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:29.596208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:29.596557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:30.096284 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:30.096366 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:30.096689 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:30.596008 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:30.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:30.596351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:31.096276 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:31.096353 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:31.096679 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:31.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:31.596743 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:31.596816 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:31.597109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:32.096778 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:32.096851 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:32.097100 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:32.596895 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:32.596983 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:32.597299 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:33.096044 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:33.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:33.096444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:33.596138 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:33.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:33.596502 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:33.596552 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:34.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:34.096114 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:34.096436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:34.454097 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:28:34.514142 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514187 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514269 1915516 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
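
This failure is kubectl's client-side validation at work: kubectl apply first downloads the OpenAPI schema from the apiserver, so while port 8441 refuses connections the apply exits 1 before any manifest is sent (kubectl's own stderr above suggests --validate=false to skip that step). minikube treats this as retryable, per the "apply failed, will retry" line. A rough Go sketch of that retry wrapper (the function name, attempt count, and backoff are assumptions; minikube's real addons logic runs the command through ssh_runner with sudo and KUBECONFIG set):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry mirrors the "apply failed, will retry" pattern from
    // addons.go: run kubectl apply, then back off and retry while the
    // apiserver is unreachable. Names and retry policy are illustrative.
    func applyWithRetry(kubectl, manifest string, attempts int, backoff time.Duration) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		// While the apiserver is down, validation cannot download the
    		// OpenAPI schema and kubectl exits with status 1 (stderr above).
    		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
    		time.Sleep(backoff)
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry("kubectl", "/etc/kubernetes/addons/storageclass.yaml", 5, 2*time.Second)
    	if err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
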
	I1222 00:28:34.596441 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:34.596529 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:34.596855 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:35.096655 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:35.096737 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:35.097056 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:35.596886 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:35.596959 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:35.597310 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:35.597390 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:36.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:36.096129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:36.096484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:36.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:36.596096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:36.596429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:37.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:37.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:37.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:37.596153 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:37.596232 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:37.596568 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:38.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:38.096447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:38.096490 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:38.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:38.596129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:38.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:39.096163 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:39.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:39.096577 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:39.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:39.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:39.596431 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:40.096080 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:40.096167 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:40.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:40.596326 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:40.596447 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:40.596781 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:41.096558 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:41.096630 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:41.096876 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:41.596824 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:41.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:41.597258 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:42.096037 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:42.096146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:42.097461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1222 00:28:42.097586 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:42.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:42.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:42.596411 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:43.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:43.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:43.096540 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:43.596134 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:43.596208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:43.596546 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:44.096098 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:44.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:44.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:44.596128 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:44.596215 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:44.596558 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:44.596612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:45.096183 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:45.096278 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:45.096726 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:45.596446 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:45.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:45.596802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:46.096553 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:46.096624 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:46.096959 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:46.596832 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:46.596908 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:46.597219 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:46.597275 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:47.095922 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:47.095989 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:47.096242 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:47.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:47.596082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:47.596368 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:48.096094 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:48.096173 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:48.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:48.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:48.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:48.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:49.096045 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:49.096118 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:49.096389 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:49.096433 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:49.596094 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:49.596170 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:49.596486 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:50.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:50.096103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:50.096376 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:50.596025 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:50.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:50.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:51.096359 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:51.096467 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:51.096794 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:51.096869 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:51.596603 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:51.596678 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:51.596972 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:52.096756 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:52.096859 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:52.097229 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:52.595971 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:52.596048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:52.596373 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:53.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:53.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:53.096394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:53.596060 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:53.596131 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:53.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:53.596547 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:54.096081 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:54.096159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:54.096525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:54.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:54.596085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:54.596398 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:55.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:55.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:55.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:55.596109 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:55.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:55.596510 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:56.096086 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:56.096159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:56.096414 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:56.096453 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:56.596382 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:56.596455 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:56.596777 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:57.096597 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:57.096690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:57.097021 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:57.596765 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:57.596835 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:57.597084 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:58.096904 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:58.096994 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:58.097340 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:58.097391 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:58.596044 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:58.596153 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:58.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.061108 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:59.096537 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.096605 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.096887 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.132532 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138577 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138673 1915516 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:28:59.141746 1915516 out.go:179] * Enabled addons: 
	I1222 00:28:59.145405 1915516 addons.go:530] duration metric: took 1m58.040642899s for enable addons: enabled=[]
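
Note the empty enabled=[] in the duration metric: both addon applies above (storageclass and storage-provisioner) failed against the refused apiserver, so after just under two minutes the enable-addons step finishes with nothing enabled, while the node readiness poll below keeps retrying.
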
	I1222 00:28:59.596293 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.596374 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.596770 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:00.096528 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:00.096670 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:00.097201 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:00.595928 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:00.596014 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:00.596325 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:00.596370 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:01.096378 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:01.096452 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:01.096753 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:01.596603 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:01.596676 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:01.596994 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:02.096700 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:02.096774 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:02.097108 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:02.596912 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:02.596985 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:02.597314 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:02.597388 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:03.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:03.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:03.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:03.596014 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:03.596079 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:03.596346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:04.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:04.096091 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:04.096420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:04.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:04.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:04.596416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:05.095995 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:05.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:05.096409 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:05.096466 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:05.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:05.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:05.596484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:06.096196 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:06.096324 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:06.096662 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:06.596476 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:06.596558 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:06.596812 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:07.096580 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:07.096655 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:07.096995 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:07.097052 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:07.596669 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:07.596745 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:07.597087 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:08.096888 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:08.096974 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:08.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:08.595949 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:08.596080 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:08.596411 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:09.095985 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:09.096064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:09.096372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:09.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:09.596027 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:09.596300 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:09.596339 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 poll repeats every ~500 ms from 00:29:10 through 00:30:10, each request logged with an empty "Request Body" and an empty "Response" (status="" headers="" milliseconds=0); node_ready.go:55 warnings, error getting node "functional-837733" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused, recur every few seconds throughout ...]
	I1222 00:30:11.096517 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.096604 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:11.097278 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:11.596969 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.597041 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.597372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.096147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.596282 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.596358 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.096032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.096105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.096445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.596024 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.596105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.596479 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:13.596523 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:14.096010 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.096415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:14.596099 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.096501 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.596363 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:16.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:16.096639 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:16.596380 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.596453 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.596745 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.096475 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.096802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.596630 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.596731 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.597138 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:18.096998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.097081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.097420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:18.097478 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:18.596096 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.096545 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.096076 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:20.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:21.096445 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.096524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.096840 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:21.596761 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.596832 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.597105 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.096883 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.096964 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.097271 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.596982 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.597056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.597440 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:22.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:23.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:23.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.596157 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.596226 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:25.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:25.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:25.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.596100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.596424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.096120 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.096534 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.596546 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.596888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:27.096591 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.096677 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.097053 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:27.097108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:27.596848 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.596955 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.096089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.096457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.596475 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.096142 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.096209 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:29.596527 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:30.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.096205 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:30.596200 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.596525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.096447 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.096541 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.596722 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.596793 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.597128 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:31.597181 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:32.096921 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.096995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:32.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.596073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.596107 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.596177 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.596451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:34.096133 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.096208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.096544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:34.096604 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:34.596306 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.596398 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.596763 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.096549 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.096621 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.096956 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.596693 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.596770 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.597109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:36.096908 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.096988 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.097351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:36.097407 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:36.596345 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.596419 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.596680 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.096025 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.096427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.096197 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.596567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:38.596620 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:39.096295 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.096370 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.596068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.596381 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.096092 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.596269 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.596348 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.596667 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:40.596717 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:41.096474 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.096825 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:41.596734 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.596808 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.597164 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.096930 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.097022 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.097406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.596367 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:43.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.096094 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:43.096482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:43.596115 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.596196 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.596500 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:44.095990 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:44.096056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:44.096329 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:44.595987 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:44.596060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:44.596378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:45.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:45.096182 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:45.096632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:45.096698 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:45.596155 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:45.596221 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:45.596471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:46.096004 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:46.096083 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:46.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:46.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:46.596090 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:46.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:47.096110 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:47.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:47.096527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:47.596194 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:47.596265 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:47.596618 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:47.596676 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:48.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:48.096139 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:48.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:48.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:48.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:48.596328 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:49.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:49.096096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:49.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:49.596175 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:49.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:49.596561 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:50.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:50.096061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:50.096371 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:50.096418 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:50.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:50.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:50.596449 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:51.096398 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:51.096473 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:51.096792 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:51.596692 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:51.596763 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:51.597029 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:52.096854 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:52.096954 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:52.097313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:52.097385 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:52.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:52.596133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:52.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:53.095986 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:53.096060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:53.096364 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:53.596043 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:53.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:53.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:54.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:54.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:54.096450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:54.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:54.596076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:54.596389 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:54.596439 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:55.096079 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:55.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:55.096554 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:55.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:55.596140 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:55.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:56.096186 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:56.096255 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:56.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:56.596413 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:56.596495 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:56.597103 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:56.597155 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:57.096854 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:57.096939 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:57.097279 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:57.596919 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:57.596998 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:57.597248 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:58.096981 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:58.097060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:58.097406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:58.595936 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:58.596020 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:58.596366 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:59.096048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:59.096113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:59.096404 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:59.096443 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:59.596030 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:59.596110 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:59.596437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:00.112227 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:00.112324 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:00.112677 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... roughly one minute of identical GET request/response cycles against https://192.168.49.2:8441/api/v1/nodes/functional-837733 elided (one attempt every ~500 ms, each logging status="" milliseconds=0 because the connection was refused); the periodic node_ready retry warnings from this window are kept verbatim below ...]
	W1222 00:31:01.097053 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:03.097215 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:05.596494 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:07.596603 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:10.096576 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:12.596356 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:14.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:16.596913 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:18.597096 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:21.096849 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:23.596497 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:25.596528 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:28.096502 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:30.096584 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:32.097194 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:34.596630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:37.096479 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:39.596461 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:41.597198 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:44.096557 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:46.596617 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:49.096507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:51.097261 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:53.596544 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:56.096624 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:31:58.097619 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:32:00.596772 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:02.096796 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.096863 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.097153 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.596892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.596965 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:02.597411 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:03.095937 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.096024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.096410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:03.596093 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.596163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.596410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.596053 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.596474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:05.096026 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.096107 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:05.096479 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:05.596083 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.596157 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.096062 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.096137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.096505 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.596315 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.596381 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.596649 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:07.096321 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.096392 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:07.096788 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:07.596571 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.596648 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.596948 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.096706 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.096778 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.097061 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.596819 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.596900 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.597237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.095966 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.096044 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.596095 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:09.596474 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:10.096007 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.096404 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:10.596067 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.596141 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.096744 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.096823 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.097172 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.595924 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.596004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.596347 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.096155 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:12.096542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:12.595989 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.596092 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.096106 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.096437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.596489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:14.596542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:15.096232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.096313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.096651 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:15.596333 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.596407 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.596674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.096354 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.096430 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.096766 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.596753 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.596828 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.597170 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:16.597226 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:17.096933 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.097004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.097253 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:17.595951 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.596024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.095977 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.096058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.096393 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.596069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:19.096024 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.096412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:19.096467 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:19.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.096172 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.096518 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.596013 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.596415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:21.096789 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.096865 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.097200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:21.097256 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:21.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.596143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.596421 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:22.096046 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:22.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:22.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:22.596043 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:22.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:22.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:23.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:23.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:23.096346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:23.595990 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:23.596063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:23.596380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:23.596435 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:24.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:24.096108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:24.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:24.596160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:24.596230 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:24.596484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:25.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:25.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:25.096452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:25.596159 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:25.596240 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:25.596576 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:25.596629 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:26.096303 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:26.096374 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:26.096637 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:26.596570 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:26.596649 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:26.596995 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:27.096624 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:27.096704 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:27.097042 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:27.596727 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:27.596794 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:27.597086 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:27.597135 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:28.096901 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:28.096979 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:28.097317 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:28.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:28.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:28.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:29.096139 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:29.096211 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:29.096453 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:29.596030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:29.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:29.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:30.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:30.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:30.096532 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:30.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:30.596085 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:30.596179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:30.596498 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:31.096503 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:31.096583 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:31.596830 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:31.596908 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:31.597245 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:32.095958 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.096033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.096334 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:32.596022 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:32.596482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:33.096166 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.096574 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:33.596105 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.596178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.096079 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.096492 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.596212 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.596289 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.596632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:34.596687 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:35.096345 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.096429 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.096712 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:35.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.596142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.596460 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.096237 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.096579 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.596663 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.596739 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.597059 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:36.597108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:37.096872 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.096958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:37.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.596110 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.596433 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.096109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.096379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.596113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:39.096175 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:39.096631 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:39.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.596384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.096140 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.096474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.596119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:41.096370 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.096433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:41.096759 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:41.596613 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.596690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.596999 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.096724 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.096822 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.097289 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.596970 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.597040 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.597359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.096020 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.595988 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.596385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:43.596439 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:44.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.096351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:44.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.096209 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.096309 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.096703 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.595999 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.596318 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:46.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.096535 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:46.096587 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:46.596358 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.596433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.596764 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.096505 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.096569 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.096863 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.596617 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.597077 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:48.096892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.096972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:48.097376 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:48.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.096001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.096409 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.596021 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.596390 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.096100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.096418 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.595985 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.596374 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:50.596428 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:51.096388 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.096466 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.096827 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:51.596847 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.596923 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.597181 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.095961 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.096037 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.096403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.596120 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.596192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.596527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:52.596580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:53.096225 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.096307 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:53.596232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.596643 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.096242 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.096330 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.096701 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.596365 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.596434 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.596698 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:54.596749 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:55.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:55.596071 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.596147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.096173 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.596488 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.596560 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.596914 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:56.596966 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:57.096719 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.096795 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.097343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:57.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.596070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.596325 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.096471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.596425 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:59.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.096321 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:59.096369 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:59.596006 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.096151 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.096235 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.096597 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.596299 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.596383 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.596831 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:01.096422 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.096517 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.096896 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:33:01.096952 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:33:01.596985 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.597111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.597684 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:02.096224 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:02.096288 1915516 node_ready.go:38] duration metric: took 6m0.000473787s for node "functional-837733" to be "Ready" ...
	I1222 00:33:02.099371 1915516 out.go:203] 
	W1222 00:33:02.102245 1915516 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:33:02.102270 1915516 out.go:285] * 
	W1222 00:33:02.104397 1915516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:33:02.107416 1915516 out.go:203] 

** /stderr **
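The stderr trace above is minikube's node-readiness wait: a GET to /api/v1/nodes/functional-837733 roughly every 500ms, each attempt failing with "connection refused" because nothing is listening on 192.168.49.2:8441, until the 6m0s WaitNodeCondition deadline expires. A minimal sketch of that poll-until-deadline pattern in Go (the URL, interval, deadline, and error strings come from the log above; everything else is illustrative and not minikube's actual implementation):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitReady polls url on a fixed interval until it answers 200 or the
    // context deadline expires, mirroring the 500ms retry loop in the log.
    func waitReady(ctx context.Context, url string, interval time.Duration) error {
        // The apiserver serves a self-signed cert; a real client would trust
        // the cluster CA instead of skipping verification.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                // Surfaces as "WaitNodeCondition: context deadline exceeded".
                return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
            case <-ticker.C:
                resp, err := client.Get(url)
                if err != nil {
                    // e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
                    continue
                }
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        err := waitReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-837733", 500*time.Millisecond)
        fmt.Println(err)
    }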
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-837733 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.461182096s for "functional-837733" cluster.
I1222 00:33:02.720068 1881219 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
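In the docker inspect output above the container itself is healthy ("Status": "running") and the apiserver port 8441/tcp is published on 127.0.0.1:38474, so the failure sits inside the guest rather than in Docker's port wiring. minikube reads these mappings with a Go template (visible further down in this log as {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}); a small sketch of extracting a mapping the same way by shelling out to docker, assuming docker is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port docker mapped to a container port, using
    // the same template shape this log shows minikube using for 22/tcp.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        // Against the container above this would print 38474.
        p, err := hostPort("functional-837733", "8441/tcp")
        fmt.Println(p, err)
    }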
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (341.696852ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
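The --format={{.Host}} flag renders one field of minikube's status payload through Go's text/template, which is why stdout shows only "Running" while the non-zero exit code (2 here, which the harness notes may be expected) separately signals that the cluster is not fully healthy. A minimal illustration of that template mechanism, with a hypothetical Status struct standing in for minikube's real one:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for minikube's status payload; the field names are
    // illustrative, chosen to match the {{.Host}} selector used above.
    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
        // Same idea as `minikube status --format={{.Host}}`.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Running
            panic(err)
        }
    }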
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image rm kicbase/echo-server:functional-885873 --alsologtostderr                                                              │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                             │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image save --daemon kicbase/echo-server:functional-885873 --alsologtostderr                                                   │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/test/nested/copy/1881219/hosts                                                                              │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/1881219.pem                                                                                       │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /usr/share/ca-certificates/1881219.pem                                                                           │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                        │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/18812192.pem                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /usr/share/ca-certificates/18812192.pem                                                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format short --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format yaml --alsologtostderr                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh pgrep buildkitd                                                                                                           │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ image          │ functional-885873 image ls --format json --alsologtostderr                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                            │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ start          │ -p functional-837733 --alsologtostderr -v=8                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:26 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:26:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:26:56.306253 1915516 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:26:56.306390 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306407 1915516 out.go:374] Setting ErrFile to fd 2...
	I1222 00:26:56.306413 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306666 1915516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:26:56.307062 1915516 out.go:368] Setting JSON to false
	I1222 00:26:56.307917 1915516 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112166,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:26:56.307982 1915516 start.go:143] virtualization:  
	I1222 00:26:56.313423 1915516 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:26:56.316413 1915516 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:26:56.316480 1915516 notify.go:221] Checking for updates...
	I1222 00:26:56.322104 1915516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:26:56.324851 1915516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:26:56.327697 1915516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:26:56.330483 1915516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:26:56.333306 1915516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:26:56.336519 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:56.336644 1915516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:26:56.361435 1915516 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:26:56.361557 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.415190 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.406057371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.415295 1915516 docker.go:319] overlay module found
	I1222 00:26:56.418305 1915516 out.go:179] * Using the docker driver based on existing profile
	I1222 00:26:56.421198 1915516 start.go:309] selected driver: docker
	I1222 00:26:56.421214 1915516 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.421302 1915516 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:26:56.421450 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.481024 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.471849294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.481474 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:56.481538 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:56.481593 1915516 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.484769 1915516 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:26:56.487560 1915516 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:26:56.490463 1915516 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:26:56.493542 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:56.493593 1915516 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:26:56.493605 1915516 cache.go:65] Caching tarball of preloaded images
	I1222 00:26:56.493623 1915516 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:26:56.493685 1915516 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:26:56.493695 1915516 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:26:56.493807 1915516 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:26:56.512598 1915516 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:26:56.512623 1915516 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:26:56.512639 1915516 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:26:56.512668 1915516 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:26:56.512725 1915516 start.go:364] duration metric: took 34.567µs to acquireMachinesLock for "functional-837733"
	I1222 00:26:56.512749 1915516 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:26:56.512754 1915516 fix.go:54] fixHost starting: 
	I1222 00:26:56.513015 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:26:56.528652 1915516 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:26:56.528683 1915516 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:26:56.531906 1915516 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:26:56.531939 1915516 machine.go:94] provisionDockerMachine start ...
	I1222 00:26:56.532017 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.548171 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.548493 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.548508 1915516 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:26:56.684687 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.684712 1915516 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:26:56.684788 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.701819 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.702143 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.702159 1915516 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:26:56.841689 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.841784 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.859107 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.859418 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.859440 1915516 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:26:56.989653 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:26:56.989683 1915516 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:26:56.989702 1915516 ubuntu.go:190] setting up certificates
	I1222 00:26:56.989718 1915516 provision.go:84] configureAuth start
	I1222 00:26:56.989779 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:57.009887 1915516 provision.go:143] copyHostCerts
	I1222 00:26:57.009938 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.009973 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:26:57.009985 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.010062 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:26:57.010160 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010187 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:26:57.010205 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010234 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:26:57.010285 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010306 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:26:57.010311 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010345 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:26:57.010415 1915516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:26:57.201216 1915516 provision.go:177] copyRemoteCerts
	I1222 00:26:57.201293 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:26:57.201364 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.220369 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.317164 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:26:57.317296 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:26:57.334371 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:26:57.334443 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:26:57.355914 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:26:57.356016 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:26:57.374427 1915516 provision.go:87] duration metric: took 384.686066ms to configureAuth
	I1222 00:26:57.374456 1915516 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:26:57.374673 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:57.374803 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.396901 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:57.397214 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:57.397228 1915516 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:26:57.748387 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:26:57.748426 1915516 machine.go:97] duration metric: took 1.216463326s to provisionDockerMachine
	I1222 00:26:57.748438 1915516 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:26:57.748450 1915516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:26:57.748516 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:26:57.748562 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.764805 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.860955 1915516 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:26:57.863953 1915516 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:26:57.863971 1915516 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:26:57.863984 1915516 command_runner.go:130] > VERSION_ID="12"
	I1222 00:26:57.863988 1915516 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:26:57.863993 1915516 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:26:57.863997 1915516 command_runner.go:130] > ID=debian
	I1222 00:26:57.864002 1915516 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:26:57.864007 1915516 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:26:57.864014 1915516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:26:57.864062 1915516 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:26:57.864078 1915516 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:26:57.864088 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:26:57.864138 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:26:57.864212 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:26:57.864218 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /etc/ssl/certs/18812192.pem
	I1222 00:26:57.864290 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:26:57.864295 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> /etc/test/nested/copy/1881219/hosts
	I1222 00:26:57.864343 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:26:57.871401 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:26:57.887952 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:26:57.904770 1915516 start.go:296] duration metric: took 156.318697ms for postStartSetup
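
The filesync scan above mirrors anything under the local .minikube/files tree onto the node at the same path, which is how the test's .pem and hosts assets landed in /etc/ssl/certs and /etc/test/nested/copy. A hypothetical example of the same mechanism (the /etc/myapp path is illustrative only):

    # Host side: stage a file under the profile's files tree ...
    mkdir -p ~/.minikube/files/etc/myapp
    echo "hello" > ~/.minikube/files/etc/myapp/config
    # ... and it is copied to /etc/myapp/config in the node on the next start
    minikube -p functional-837733 ssh -- cat /etc/myapp/config
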
	I1222 00:26:57.904856 1915516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:26:57.904900 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.924440 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.018813 1915516 command_runner.go:130] > 12%
	I1222 00:26:58.018908 1915516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:26:58.024077 1915516 command_runner.go:130] > 172G
	I1222 00:26:58.024155 1915516 fix.go:56] duration metric: took 1.511397372s for fixHost
	I1222 00:26:58.024173 1915516 start.go:83] releasing machines lock for "functional-837733", held for 1.511435197s
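
The two probes just before the lock release sample disk pressure on /var: percent used and gigabytes free, each read from the second (data) row of df output. A roughly equivalent standalone check:

    # Percent of /var in use (df -h, column 5 of the data row)
    used=$(df -h /var | awk 'NR==2{print $5}')
    # Gigabytes still free (df -BG, column 4 of the data row)
    free=$(df -BG /var | awk 'NR==2{print $4}')
    echo "/var: ${used} used, ${free} free"   # here: "12% used, 172G free"
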
	I1222 00:26:58.024244 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:58.041457 1915516 ssh_runner.go:195] Run: cat /version.json
	I1222 00:26:58.041501 1915516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:26:58.041514 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.041557 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.062715 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.064581 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.169068 1915516 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:26:58.266136 1915516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:26:58.269129 1915516 ssh_runner.go:195] Run: systemctl --version
	I1222 00:26:58.275371 1915516 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:26:58.275406 1915516 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:26:58.275466 1915516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:26:58.310186 1915516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:26:58.314515 1915516 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:26:58.314563 1915516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:26:58.314620 1915516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:26:58.322376 1915516 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:26:58.322401 1915516 start.go:496] detecting cgroup driver to use...
	I1222 00:26:58.322452 1915516 detect.go:187] detected "cgroupfs" cgroup driver on host os
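
A common heuristic behind a 'detected "cgroupfs"' style line is inspecting the filesystem type mounted at /sys/fs/cgroup; the exact check minikube's detect.go performs is not shown in this log, so treat this as a sketch:

    # cgroup2fs => unified hierarchy (cgroup v2); tmpfs => legacy v1 mounts
    case "$(stat -fc %T /sys/fs/cgroup)" in
      cgroup2fs) echo "cgroup v2" ;;
      tmpfs)     echo "cgroup v1 (cgroupfs driver)" ;;
      *)         echo "unknown hierarchy" ;;
    esac
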
	I1222 00:26:58.322524 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:26:58.337747 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:26:58.350732 1915516 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:26:58.350819 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:26:58.365932 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:26:58.378579 1915516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:26:58.492803 1915516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:26:58.604423 1915516 docker.go:234] disabling docker service ...
	I1222 00:26:58.604496 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:26:58.619028 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:26:58.631440 1915516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:26:58.747823 1915516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:26:58.866539 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:26:58.885381 1915516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:26:58.901152 1915516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
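
With /etc/crictl.yaml pointing at the CRI-O socket, crictl no longer needs --runtime-endpoint on every invocation. For example:

    # Explicit form of what /etc/crictl.yaml now implies
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
    # With the config file in place, the short forms suffice
    sudo crictl info
    sudo crictl ps -a
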
	I1222 00:26:58.902678 1915516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:26:58.902778 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.912460 1915516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:26:58.912567 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.924743 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.935404 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.944063 1915516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:26:58.951908 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.960893 1915516 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.969826 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.978699 1915516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:26:58.985309 1915516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:26:58.986567 1915516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:26:58.993813 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.112143 1915516 ssh_runner.go:195] Run: sudo systemctl restart crio
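
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl. A sketch reconstructed from the commands, not a dump of the actual file:

    # Keys the edits above are expected to have set in the drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected (sketch):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
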
	I1222 00:26:59.288646 1915516 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:26:59.288768 1915516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:26:59.292480 1915516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1222 00:26:59.292507 1915516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:26:59.292515 1915516 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1222 00:26:59.292522 1915516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:26:59.292531 1915516 command_runner.go:130] > Access: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292537 1915516 command_runner.go:130] > Modify: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292542 1915516 command_runner.go:130] > Change: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292549 1915516 command_runner.go:130] >  Birth: -
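
The 60s socket wait amounts to polling stat on the socket path until it appears; a minimal shell equivalent of that loop:

    # Poll for the CRI-O socket with a 60-second deadline (bash)
    deadline=$((SECONDS + 60))
    until stat /var/run/crio/crio.sock >/dev/null 2>&1; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for crio.sock" >&2; exit 1; }
      sleep 1
    done
    echo "crio.sock is up"
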
	I1222 00:26:59.292591 1915516 start.go:564] Will wait 60s for crictl version
	I1222 00:26:59.292646 1915516 ssh_runner.go:195] Run: which crictl
	I1222 00:26:59.295996 1915516 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:26:59.296252 1915516 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:26:59.318308 1915516 command_runner.go:130] > Version:  0.1.0
	I1222 00:26:59.318367 1915516 command_runner.go:130] > RuntimeName:  cri-o
	I1222 00:26:59.318402 1915516 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1222 00:26:59.318421 1915516 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:26:59.320537 1915516 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:26:59.320652 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.347108 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.347172 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.347193 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.347237 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.347255 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.347273 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.347296 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.347319 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.347336 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.347352 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.347374 1915516 command_runner.go:130] >      static
	I1222 00:26:59.347390 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.347405 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.347421 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.347442 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.347466 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.347483 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.347500 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.347516 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.347534 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.350153 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.376069 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.376144 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.376164 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.376182 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.376201 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.376235 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.376261 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.376279 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.376305 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.376329 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.376347 1915516 command_runner.go:130] >      static
	I1222 00:26:59.376375 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.376391 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.376409 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.376433 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.376462 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.376479 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.376495 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.376513 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.376545 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.383960 1915516 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:26:59.386763 1915516 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:26:59.402614 1915516 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:26:59.406608 1915516 command_runner.go:130] > 192.168.49.1	host.minikube.internal
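
The grep above checks that host.minikube.internal resolves to the Docker network gateway (192.168.49.1 here); when the entry is missing, it gets appended. The idempotent pattern looks like:

    # Add the gateway alias only if it is not already present
    grep -q 'host.minikube.internal' /etc/hosts ||
      printf '192.168.49.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts >/dev/null
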
	I1222 00:26:59.406693 1915516 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:26:59.406800 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:59.406861 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.440185 1915516 command_runner.go:130] > {
	I1222 00:26:59.440203 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.440208 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440216 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.440220 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440226 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.440230 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440234 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440243 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.440251 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.440254 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440259 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.440263 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440270 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440273 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440277 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440284 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.440288 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440301 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.440305 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440309 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440317 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.440326 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.440329 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440333 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.440337 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440344 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440347 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440350 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440357 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.440361 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440366 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.440369 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440373 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440381 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.440388 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.440392 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440396 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.440404 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.440408 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440411 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440414 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440421 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.440424 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440429 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.440433 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440437 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440445 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.440452 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.440455 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440459 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.440466 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440470 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440474 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440482 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440486 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440489 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440492 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440499 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.440502 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440509 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.440512 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440516 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440524 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.440532 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.440535 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440539 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.440542 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440546 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440549 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440553 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440557 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440560 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440563 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440569 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.440573 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440578 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.440582 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440586 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440594 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.440602 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.440606 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440610 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.440613 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440624 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440627 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440631 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440634 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440637 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440640 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440647 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.440651 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440656 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.440659 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440663 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440671 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.440679 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.440681 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440685 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.440689 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440693 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440697 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440700 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440706 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.440710 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440714 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.440718 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440721 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440729 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.440745 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.440748 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440752 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.440755 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440759 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440762 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440765 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440769 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440775 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440779 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440785 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.440789 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440793 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.440797 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440800 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440808 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.440816 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.440819 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440842 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.440846 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440850 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.440853 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440857 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440860 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.440863 1915516 command_runner.go:130] >     }
	I1222 00:26:59.440865 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.440868 1915516 command_runner.go:130] > }
	I1222 00:26:59.441031 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.441039 1915516 crio.go:433] Images already preloaded, skipping extraction
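
The preload decision parses the JSON from crictl images and compares the repoTags against the expected image set for v1.35.0-rc.1. With jq available (an assumption; it is not necessarily part of the node image), a similar check can be done by hand:

    # List every tag CRI-O already has, the raw material for the preload check
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
    # Spot-check one required image for this Kubernetes version
    sudo crictl images --output json \
      | jq -e '.images[].repoTags[] | select(. == "registry.k8s.io/kube-apiserver:v1.35.0-rc.1")' \
      >/dev/null && echo "kube-apiserver preloaded"
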
	I1222 00:26:59.441099 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.463515 1915516 command_runner.go:130] > {
	I1222 00:26:59.463533 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.463538 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463547 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.463552 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463566 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.463570 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463574 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463582 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.463590 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.463594 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463598 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.463602 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463606 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463610 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463613 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463619 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.463624 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463629 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.463632 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463636 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463655 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.463664 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.463667 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463671 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.463675 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463680 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463683 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463686 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463693 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.463697 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463703 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.463706 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463710 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463718 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.463726 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.463730 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463734 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.463741 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.463745 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463748 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463752 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463758 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.463761 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463767 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.463771 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463775 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463782 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.463790 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.463793 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463797 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.463801 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463805 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463808 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463816 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463819 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463823 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463826 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463832 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.463836 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463841 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.463844 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463848 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463855 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.463863 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.463867 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463871 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.463875 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463878 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463882 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463885 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463893 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463896 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463899 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463905 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.463909 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463914 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.463917 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463921 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463929 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.463937 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.463940 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463944 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.463947 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463951 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463954 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463958 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463962 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463965 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463968 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463974 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.463979 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463984 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.463987 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463991 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463998 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.464006 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.464009 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464013 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.464017 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464021 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464024 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464027 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464033 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.464040 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464046 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.464050 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464053 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464061 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.464078 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.464081 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464085 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.464089 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464093 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.464096 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464100 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464103 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464106 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464109 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464115 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.464119 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464123 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.464126 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464130 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464137 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.464145 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.464148 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464151 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.464155 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464158 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.464161 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464165 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464168 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.464171 1915516 command_runner.go:130] >     }
	I1222 00:26:59.464174 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.464176 1915516 command_runner.go:130] > }
	I1222 00:26:59.467117 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.467182 1915516 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:26:59.467204 1915516 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:26:59.467349 1915516 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
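
The generated [Service] override above is applied as a systemd drop-in for kubelet, so the effective unit (base file plus drop-ins, including the ExecStart shown) can be inspected after the start completes. A sketch; the 10-kubeadm.conf path is the conventional kubeadm drop-in location and is an assumption not confirmed by this log:

    # Show the merged kubelet unit, including minikube's ExecStart override
    minikube -p functional-837733 ssh -- sudo systemctl cat kubelet
    # Or read the drop-in directly (assumed path)
    minikube -p functional-837733 ssh -- \
      sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
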
	I1222 00:26:59.467469 1915516 ssh_runner.go:195] Run: crio config
	I1222 00:26:59.517634 1915516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1222 00:26:59.517672 1915516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1222 00:26:59.517681 1915516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1222 00:26:59.517684 1915516 command_runner.go:130] > #
	I1222 00:26:59.517692 1915516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1222 00:26:59.517698 1915516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1222 00:26:59.517704 1915516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1222 00:26:59.517712 1915516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1222 00:26:59.517716 1915516 command_runner.go:130] > # reload'.
	I1222 00:26:59.517723 1915516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1222 00:26:59.517729 1915516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1222 00:26:59.517735 1915516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1222 00:26:59.517742 1915516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1222 00:26:59.517745 1915516 command_runner.go:130] > [crio]
	I1222 00:26:59.517751 1915516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1222 00:26:59.517756 1915516 command_runner.go:130] > # containers images, in this directory.
	I1222 00:26:59.517994 1915516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1222 00:26:59.518006 1915516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1222 00:26:59.518014 1915516 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1222 00:26:59.518022 1915516 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1222 00:26:59.518026 1915516 command_runner.go:130] > # imagestore = ""
	I1222 00:26:59.518032 1915516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1222 00:26:59.518043 1915516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1222 00:26:59.518048 1915516 command_runner.go:130] > # storage_driver = "overlay"
	I1222 00:26:59.518053 1915516 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1222 00:26:59.518059 1915516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1222 00:26:59.518209 1915516 command_runner.go:130] > # storage_option = [
	I1222 00:26:59.518216 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.518229 1915516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1222 00:26:59.518237 1915516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1222 00:26:59.518375 1915516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1222 00:26:59.518385 1915516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1222 00:26:59.518392 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1222 00:26:59.518406 1915516 command_runner.go:130] > # always happen on a node reboot
	I1222 00:26:59.518412 1915516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1222 00:26:59.518638 1915516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1222 00:26:59.518654 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1222 00:26:59.518660 1915516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1222 00:26:59.521919 1915516 command_runner.go:130] > # version_file_persist = ""
	I1222 00:26:59.521945 1915516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1222 00:26:59.521986 1915516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1222 00:26:59.521998 1915516 command_runner.go:130] > # internal_wipe = true
	I1222 00:26:59.522012 1915516 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1222 00:26:59.522023 1915516 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1222 00:26:59.522028 1915516 command_runner.go:130] > # internal_repair = true
	I1222 00:26:59.522049 1915516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1222 00:26:59.522075 1915516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1222 00:26:59.522087 1915516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1222 00:26:59.522103 1915516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1222 00:26:59.522130 1915516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1222 00:26:59.522140 1915516 command_runner.go:130] > [crio.api]
	I1222 00:26:59.522156 1915516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1222 00:26:59.522172 1915516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1222 00:26:59.522199 1915516 command_runner.go:130] > # IP address on which the stream server will listen.
	I1222 00:26:59.522210 1915516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1222 00:26:59.522227 1915516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1222 00:26:59.522243 1915516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1222 00:26:59.522251 1915516 command_runner.go:130] > # stream_port = "0"
	I1222 00:26:59.522257 1915516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1222 00:26:59.522262 1915516 command_runner.go:130] > # stream_enable_tls = false
	I1222 00:26:59.522297 1915516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1222 00:26:59.522308 1915516 command_runner.go:130] > # stream_idle_timeout = ""
	I1222 00:26:59.522315 1915516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1222 00:26:59.522330 1915516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522334 1915516 command_runner.go:130] > # stream_tls_cert = ""
	I1222 00:26:59.522352 1915516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1222 00:26:59.522421 1915516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522433 1915516 command_runner.go:130] > # stream_tls_key = ""
	I1222 00:26:59.522440 1915516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1222 00:26:59.522455 1915516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1222 00:26:59.522468 1915516 command_runner.go:130] > # automatically pick up the changes.
	I1222 00:26:59.522477 1915516 command_runner.go:130] > # stream_tls_ca = ""
	I1222 00:26:59.522505 1915516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522519 1915516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1222 00:26:59.522527 1915516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522540 1915516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1222 00:26:59.522547 1915516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1222 00:26:59.522556 1915516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1222 00:26:59.522560 1915516 command_runner.go:130] > [crio.runtime]
	I1222 00:26:59.522574 1915516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1222 00:26:59.522584 1915516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1222 00:26:59.522588 1915516 command_runner.go:130] > # "nofile=1024:2048"
	I1222 00:26:59.522599 1915516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1222 00:26:59.522607 1915516 command_runner.go:130] > # default_ulimits = [
	I1222 00:26:59.522611 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522624 1915516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1222 00:26:59.522631 1915516 command_runner.go:130] > # no_pivot = false
	I1222 00:26:59.522642 1915516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1222 00:26:59.522652 1915516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1222 00:26:59.522657 1915516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1222 00:26:59.522672 1915516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1222 00:26:59.522681 1915516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1222 00:26:59.522689 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522700 1915516 command_runner.go:130] > # conmon = ""
	I1222 00:26:59.522705 1915516 command_runner.go:130] > # Cgroup setting for conmon
	I1222 00:26:59.522717 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1222 00:26:59.522721 1915516 command_runner.go:130] > conmon_cgroup = "pod"
	I1222 00:26:59.522733 1915516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1222 00:26:59.522741 1915516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1222 00:26:59.522760 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522767 1915516 command_runner.go:130] > # conmon_env = [
	I1222 00:26:59.522770 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522776 1915516 command_runner.go:130] > # Additional environment variables to set for all the
	I1222 00:26:59.522786 1915516 command_runner.go:130] > # containers. These are overridden if set in the
	I1222 00:26:59.522796 1915516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1222 00:26:59.522803 1915516 command_runner.go:130] > # default_env = [
	I1222 00:26:59.522807 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522813 1915516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1222 00:26:59.522829 1915516 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1222 00:26:59.522839 1915516 command_runner.go:130] > # selinux = false
	I1222 00:26:59.522846 1915516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1222 00:26:59.522861 1915516 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1222 00:26:59.522870 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522874 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.522886 1915516 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1222 00:26:59.522895 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522900 1915516 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1222 00:26:59.522911 1915516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1222 00:26:59.522926 1915516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1222 00:26:59.522936 1915516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1222 00:26:59.522948 1915516 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1222 00:26:59.522956 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522962 1915516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1222 00:26:59.522976 1915516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1222 00:26:59.522984 1915516 command_runner.go:130] > # the cgroup blockio controller.
	I1222 00:26:59.522992 1915516 command_runner.go:130] > # blockio_config_file = ""
	I1222 00:26:59.522999 1915516 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1222 00:26:59.523010 1915516 command_runner.go:130] > # blockio parameters.
	I1222 00:26:59.523015 1915516 command_runner.go:130] > # blockio_reload = false
	I1222 00:26:59.523027 1915516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1222 00:26:59.523033 1915516 command_runner.go:130] > # irqbalance daemon.
	I1222 00:26:59.523049 1915516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1222 00:26:59.523059 1915516 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1222 00:26:59.523067 1915516 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1222 00:26:59.523081 1915516 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1222 00:26:59.523091 1915516 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1222 00:26:59.523103 1915516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1222 00:26:59.523112 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523116 1915516 command_runner.go:130] > # rdt_config_file = ""
	I1222 00:26:59.523122 1915516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1222 00:26:59.523133 1915516 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1222 00:26:59.523140 1915516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1222 00:26:59.523148 1915516 command_runner.go:130] > # separate_pull_cgroup = ""
	I1222 00:26:59.523159 1915516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1222 00:26:59.523202 1915516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1222 00:26:59.523210 1915516 command_runner.go:130] > # will be added.
	I1222 00:26:59.523215 1915516 command_runner.go:130] > # default_capabilities = [
	I1222 00:26:59.523219 1915516 command_runner.go:130] > # 	"CHOWN",
	I1222 00:26:59.523223 1915516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1222 00:26:59.523380 1915516 command_runner.go:130] > # 	"FSETID",
	I1222 00:26:59.523391 1915516 command_runner.go:130] > # 	"FOWNER",
	I1222 00:26:59.523395 1915516 command_runner.go:130] > # 	"SETGID",
	I1222 00:26:59.523399 1915516 command_runner.go:130] > # 	"SETUID",
	I1222 00:26:59.523427 1915516 command_runner.go:130] > # 	"SETPCAP",
	I1222 00:26:59.523432 1915516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1222 00:26:59.523436 1915516 command_runner.go:130] > # 	"KILL",
	I1222 00:26:59.523439 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523448 1915516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1222 00:26:59.523464 1915516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1222 00:26:59.523470 1915516 command_runner.go:130] > # add_inheritable_capabilities = false
	I1222 00:26:59.523476 1915516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1222 00:26:59.523482 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523486 1915516 command_runner.go:130] > default_sysctls = [
	I1222 00:26:59.523491 1915516 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1222 00:26:59.523494 1915516 command_runner.go:130] > ]
	I1222 00:26:59.523499 1915516 command_runner.go:130] > # List of devices on the host that a
	I1222 00:26:59.523506 1915516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1222 00:26:59.523509 1915516 command_runner.go:130] > # allowed_devices = [
	I1222 00:26:59.523513 1915516 command_runner.go:130] > # 	"/dev/fuse",
	I1222 00:26:59.523517 1915516 command_runner.go:130] > # 	"/dev/net/tun",
	I1222 00:26:59.523520 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523525 1915516 command_runner.go:130] > # List of additional devices, specified as
	I1222 00:26:59.523533 1915516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1222 00:26:59.523538 1915516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1222 00:26:59.523544 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523548 1915516 command_runner.go:130] > # additional_devices = [
	I1222 00:26:59.523551 1915516 command_runner.go:130] > # ]
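As a sketch of the "<device-on-host>:<device-on-container>:<permissions>" format documented above, a drop-in like the following could expose the host FUSE device to all containers (the drop-in idea is illustrative; the key itself is the additional_devices option shown in this config):

	[crio.runtime]
	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",
	]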
	I1222 00:26:59.523557 1915516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1222 00:26:59.523561 1915516 command_runner.go:130] > # cdi_spec_dirs = [
	I1222 00:26:59.523565 1915516 command_runner.go:130] > # 	"/etc/cdi",
	I1222 00:26:59.523569 1915516 command_runner.go:130] > # 	"/var/run/cdi",
	I1222 00:26:59.523574 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523581 1915516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1222 00:26:59.523588 1915516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1222 00:26:59.523591 1915516 command_runner.go:130] > # Defaults to false.
	I1222 00:26:59.523596 1915516 command_runner.go:130] > # device_ownership_from_security_context = false
	I1222 00:26:59.523603 1915516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1222 00:26:59.523609 1915516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1222 00:26:59.523612 1915516 command_runner.go:130] > # hooks_dir = [
	I1222 00:26:59.523617 1915516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1222 00:26:59.523622 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523628 1915516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1222 00:26:59.523635 1915516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1222 00:26:59.523642 1915516 command_runner.go:130] > # its default mounts from the following two files:
	I1222 00:26:59.523644 1915516 command_runner.go:130] > #
	I1222 00:26:59.523651 1915516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1222 00:26:59.523658 1915516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1222 00:26:59.523663 1915516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1222 00:26:59.523666 1915516 command_runner.go:130] > #
	I1222 00:26:59.523672 1915516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1222 00:26:59.523679 1915516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1222 00:26:59.523686 1915516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1222 00:26:59.523692 1915516 command_runner.go:130] > #      only add mounts it finds in this file.
	I1222 00:26:59.523695 1915516 command_runner.go:130] > #
	I1222 00:26:59.523700 1915516 command_runner.go:130] > # default_mounts_file = ""
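To make the /SRC:/DST format concrete, a hypothetical default_mounts_file (the path and entry below are illustrative, not taken from this run) would contain one mount per line:

	# e.g. /etc/containers/mounts.conf
	/usr/share/secrets:/run/secrets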
	I1222 00:26:59.523706 1915516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1222 00:26:59.523713 1915516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1222 00:26:59.523716 1915516 command_runner.go:130] > # pids_limit = -1
	I1222 00:26:59.523723 1915516 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1222 00:26:59.523729 1915516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1222 00:26:59.523735 1915516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1222 00:26:59.523744 1915516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1222 00:26:59.523748 1915516 command_runner.go:130] > # log_size_max = -1
	I1222 00:26:59.523755 1915516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1222 00:26:59.523759 1915516 command_runner.go:130] > # log_to_journald = false
	I1222 00:26:59.523765 1915516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1222 00:26:59.523770 1915516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1222 00:26:59.523775 1915516 command_runner.go:130] > # Path to directory for container attach sockets.
	I1222 00:26:59.523779 1915516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1222 00:26:59.523784 1915516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1222 00:26:59.523788 1915516 command_runner.go:130] > # bind_mount_prefix = ""
	I1222 00:26:59.523794 1915516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1222 00:26:59.523798 1915516 command_runner.go:130] > # read_only = false
	I1222 00:26:59.523804 1915516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1222 00:26:59.523810 1915516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1222 00:26:59.523815 1915516 command_runner.go:130] > # live configuration reload.
	I1222 00:26:59.523818 1915516 command_runner.go:130] > # log_level = "info"
	I1222 00:26:59.523825 1915516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1222 00:26:59.523830 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523834 1915516 command_runner.go:130] > # log_filter = ""
	I1222 00:26:59.523841 1915516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523847 1915516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1222 00:26:59.523850 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523858 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523862 1915516 command_runner.go:130] > # uid_mappings = ""
	I1222 00:26:59.523868 1915516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523874 1915516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1222 00:26:59.523879 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523887 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523891 1915516 command_runner.go:130] > # gid_mappings = ""
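A minimal sketch of the containerUID:HostUID:Size / containerGID:HostGID:Size syntax above, with illustrative values mapping container root to an unprivileged host ID range:

	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"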
	I1222 00:26:59.523897 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1222 00:26:59.523903 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523909 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523917 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523921 1915516 command_runner.go:130] > # minimum_mappable_uid = -1
	I1222 00:26:59.523928 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1222 00:26:59.523934 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523940 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523948 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523952 1915516 command_runner.go:130] > # minimum_mappable_gid = -1
	I1222 00:26:59.523958 1915516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1222 00:26:59.523964 1915516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1222 00:26:59.523970 1915516 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1222 00:26:59.523975 1915516 command_runner.go:130] > # ctr_stop_timeout = 30
	I1222 00:26:59.523981 1915516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1222 00:26:59.523987 1915516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1222 00:26:59.523992 1915516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1222 00:26:59.523997 1915516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1222 00:26:59.524001 1915516 command_runner.go:130] > # drop_infra_ctr = true
	I1222 00:26:59.524011 1915516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1222 00:26:59.524017 1915516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1222 00:26:59.524025 1915516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1222 00:26:59.524033 1915516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1222 00:26:59.524041 1915516 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1222 00:26:59.524048 1915516 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1222 00:26:59.524053 1915516 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1222 00:26:59.524059 1915516 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1222 00:26:59.524063 1915516 command_runner.go:130] > # shared_cpuset = ""
	I1222 00:26:59.524068 1915516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1222 00:26:59.524074 1915516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1222 00:26:59.524078 1915516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1222 00:26:59.524086 1915516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1222 00:26:59.524090 1915516 command_runner.go:130] > # pinns_path = ""
	I1222 00:26:59.524095 1915516 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1222 00:26:59.524101 1915516 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1222 00:26:59.524106 1915516 command_runner.go:130] > # enable_criu_support = true
	I1222 00:26:59.524111 1915516 command_runner.go:130] > # Enable/disable the generation of the container,
	I1222 00:26:59.524117 1915516 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1222 00:26:59.524121 1915516 command_runner.go:130] > # enable_pod_events = false
	I1222 00:26:59.524128 1915516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1222 00:26:59.524133 1915516 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1222 00:26:59.524137 1915516 command_runner.go:130] > # default_runtime = "crun"
	I1222 00:26:59.524142 1915516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1222 00:26:59.524150 1915516 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1222 00:26:59.524159 1915516 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1222 00:26:59.524164 1915516 command_runner.go:130] > # creation as a file is not desired either.
	I1222 00:26:59.524173 1915516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1222 00:26:59.524178 1915516 command_runner.go:130] > # the hostname is being managed dynamically.
	I1222 00:26:59.524182 1915516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1222 00:26:59.524185 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.524191 1915516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1222 00:26:59.524198 1915516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1222 00:26:59.524204 1915516 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1222 00:26:59.524209 1915516 command_runner.go:130] > # Each entry in the table should follow the format:
	I1222 00:26:59.524212 1915516 command_runner.go:130] > #
	I1222 00:26:59.524217 1915516 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1222 00:26:59.524221 1915516 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1222 00:26:59.524226 1915516 command_runner.go:130] > # runtime_type = "oci"
	I1222 00:26:59.524230 1915516 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1222 00:26:59.524236 1915516 command_runner.go:130] > # inherit_default_runtime = false
	I1222 00:26:59.524246 1915516 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1222 00:26:59.524250 1915516 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1222 00:26:59.524255 1915516 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1222 00:26:59.524259 1915516 command_runner.go:130] > # monitor_env = []
	I1222 00:26:59.524264 1915516 command_runner.go:130] > # privileged_without_host_devices = false
	I1222 00:26:59.524268 1915516 command_runner.go:130] > # allowed_annotations = []
	I1222 00:26:59.524274 1915516 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1222 00:26:59.524277 1915516 command_runner.go:130] > # no_sync_log = false
	I1222 00:26:59.524281 1915516 command_runner.go:130] > # default_annotations = {}
	I1222 00:26:59.524285 1915516 command_runner.go:130] > # stream_websockets = false
	I1222 00:26:59.524289 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.524337 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.524343 1915516 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1222 00:26:59.524349 1915516 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1222 00:26:59.524356 1915516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1222 00:26:59.524362 1915516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1222 00:26:59.524365 1915516 command_runner.go:130] > #   in $PATH.
	I1222 00:26:59.524371 1915516 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1222 00:26:59.524376 1915516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1222 00:26:59.524382 1915516 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1222 00:26:59.524386 1915516 command_runner.go:130] > #   state.
	I1222 00:26:59.524392 1915516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1222 00:26:59.524399 1915516 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1222 00:26:59.524405 1915516 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1222 00:26:59.524411 1915516 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1222 00:26:59.524417 1915516 command_runner.go:130] > #   the values from the default runtime on load time.
	I1222 00:26:59.524423 1915516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1222 00:26:59.524429 1915516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1222 00:26:59.524435 1915516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1222 00:26:59.524441 1915516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1222 00:26:59.524446 1915516 command_runner.go:130] > #   The currently recognized values are:
	I1222 00:26:59.524453 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1222 00:26:59.524460 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1222 00:26:59.524466 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1222 00:26:59.524472 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1222 00:26:59.524479 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1222 00:26:59.524489 1915516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1222 00:26:59.524496 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1222 00:26:59.524502 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1222 00:26:59.524508 1915516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1222 00:26:59.524514 1915516 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1222 00:26:59.524521 1915516 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1222 00:26:59.524528 1915516 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1222 00:26:59.524534 1915516 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1222 00:26:59.524540 1915516 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1222 00:26:59.524546 1915516 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1222 00:26:59.524554 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1222 00:26:59.524560 1915516 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1222 00:26:59.524564 1915516 command_runner.go:130] > #   deprecated option "conmon".
	I1222 00:26:59.524571 1915516 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1222 00:26:59.524576 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1222 00:26:59.524583 1915516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1222 00:26:59.524587 1915516 command_runner.go:130] > #   should be moved to the container's cgroup
	I1222 00:26:59.524594 1915516 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1222 00:26:59.524599 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1222 00:26:59.524606 1915516 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1222 00:26:59.524610 1915516 command_runner.go:130] > #   conmon-rs by using:
	I1222 00:26:59.524617 1915516 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1222 00:26:59.524625 1915516 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1222 00:26:59.524633 1915516 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1222 00:26:59.524640 1915516 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1222 00:26:59.524645 1915516 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1222 00:26:59.524652 1915516 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1222 00:26:59.524660 1915516 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1222 00:26:59.524665 1915516 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1222 00:26:59.524673 1915516 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1222 00:26:59.524683 1915516 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1222 00:26:59.524688 1915516 command_runner.go:130] > #   when a machine crash happens.
	I1222 00:26:59.524695 1915516 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1222 00:26:59.524703 1915516 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1222 00:26:59.524711 1915516 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1222 00:26:59.524715 1915516 command_runner.go:130] > #   seccomp profile for the runtime.
	I1222 00:26:59.524721 1915516 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1222 00:26:59.524729 1915516 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
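Pulling several of the fields documented above into one entry, a hedged sketch of an additional runtime handler (the handler name, binary paths, and annotation choice are assumptions for illustration, not part of this configuration):

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"
	runtime_type = "vm"
	runtime_config_path = "/etc/kata-containers/configuration.toml"
	privileged_without_host_devices = true
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]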
	I1222 00:26:59.524732 1915516 command_runner.go:130] > #
	I1222 00:26:59.524737 1915516 command_runner.go:130] > # Using the seccomp notifier feature:
	I1222 00:26:59.524739 1915516 command_runner.go:130] > #
	I1222 00:26:59.524745 1915516 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1222 00:26:59.524752 1915516 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1222 00:26:59.524755 1915516 command_runner.go:130] > #
	I1222 00:26:59.524761 1915516 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1222 00:26:59.524768 1915516 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1222 00:26:59.524770 1915516 command_runner.go:130] > #
	I1222 00:26:59.524776 1915516 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1222 00:26:59.524780 1915516 command_runner.go:130] > # feature.
	I1222 00:26:59.524783 1915516 command_runner.go:130] > #
	I1222 00:26:59.524789 1915516 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1222 00:26:59.524795 1915516 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1222 00:26:59.524801 1915516 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1222 00:26:59.524807 1915516 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1222 00:26:59.524813 1915516 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1222 00:26:59.524815 1915516 command_runner.go:130] > #
	I1222 00:26:59.524834 1915516 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1222 00:26:59.524841 1915516 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1222 00:26:59.524844 1915516 command_runner.go:130] > #
	I1222 00:26:59.524855 1915516 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1222 00:26:59.524861 1915516 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1222 00:26:59.524865 1915516 command_runner.go:130] > #
	I1222 00:26:59.524871 1915516 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1222 00:26:59.524877 1915516 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1222 00:26:59.524880 1915516 command_runner.go:130] > # limitation.
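On the pod side, the notifier flow described above is driven by the annotation plus the required restart policy; a minimal sketch (pod name, container name, and image are placeholders):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never          # required, otherwise the kubelet restarts the container
	  containers:
	  - name: app
	    image: registry.example/app:latest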
	I1222 00:26:59.524884 1915516 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1222 00:26:59.524890 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1222 00:26:59.524894 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524900 1915516 command_runner.go:130] > runtime_root = "/run/crun"
	I1222 00:26:59.524904 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524908 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524912 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524917 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524921 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524924 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524931 1915516 command_runner.go:130] > allowed_annotations = [
	I1222 00:26:59.524936 1915516 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1222 00:26:59.524939 1915516 command_runner.go:130] > ]
	I1222 00:26:59.524944 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524948 1915516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1222 00:26:59.524952 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1222 00:26:59.524955 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524959 1915516 command_runner.go:130] > runtime_root = "/run/runc"
	I1222 00:26:59.524964 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524967 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524972 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524976 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524980 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524984 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524988 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524995 1915516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1222 00:26:59.525000 1915516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1222 00:26:59.525007 1915516 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1222 00:26:59.525015 1915516 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1222 00:26:59.525027 1915516 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1222 00:26:59.525037 1915516 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1222 00:26:59.525044 1915516 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1222 00:26:59.525050 1915516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1222 00:26:59.525059 1915516 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1222 00:26:59.525068 1915516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1222 00:26:59.525074 1915516 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1222 00:26:59.525081 1915516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1222 00:26:59.525085 1915516 command_runner.go:130] > # Example:
	I1222 00:26:59.525089 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1222 00:26:59.525095 1915516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1222 00:26:59.525100 1915516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1222 00:26:59.525108 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1222 00:26:59.525111 1915516 command_runner.go:130] > # cpuset = "0-1"
	I1222 00:26:59.525115 1915516 command_runner.go:130] > # cpushares = "5"
	I1222 00:26:59.525119 1915516 command_runner.go:130] > # cpuquota = "1000"
	I1222 00:26:59.525122 1915516 command_runner.go:130] > # cpuperiod = "100000"
	I1222 00:26:59.525126 1915516 command_runner.go:130] > # cpulimit = "35"
	I1222 00:26:59.525129 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.525133 1915516 command_runner.go:130] > # The workload name is workload-type.
	I1222 00:26:59.525141 1915516 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1222 00:26:59.525147 1915516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1222 00:26:59.525153 1915516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1222 00:26:59.525161 1915516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1222 00:26:59.525166 1915516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
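Expressed as pod metadata, the activation and per-container override annotations above would look roughly like this (container name and value are illustrative; the override follows the $annotation_prefix.$resource/$ctrName form from the description above):

	metadata:
	  annotations:
	    io.crio/workload: ""                        # activation (key only, value ignored)
	    io.crio.workload-type.cpushares/app: "200"  # per-container cpushares override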
	I1222 00:26:59.525171 1915516 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1222 00:26:59.525178 1915516 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1222 00:26:59.525182 1915516 command_runner.go:130] > # Default value is set to true
	I1222 00:26:59.525187 1915516 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1222 00:26:59.525192 1915516 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1222 00:26:59.525197 1915516 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1222 00:26:59.525201 1915516 command_runner.go:130] > # Default value is set to 'false'
	I1222 00:26:59.525205 1915516 command_runner.go:130] > # disable_hostport_mapping = false
	I1222 00:26:59.525211 1915516 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1222 00:26:59.525219 1915516 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1222 00:26:59.525222 1915516 command_runner.go:130] > # timezone = ""
	I1222 00:26:59.525229 1915516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1222 00:26:59.525231 1915516 command_runner.go:130] > #
	I1222 00:26:59.525238 1915516 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1222 00:26:59.525244 1915516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1222 00:26:59.525248 1915516 command_runner.go:130] > [crio.image]
	I1222 00:26:59.525254 1915516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1222 00:26:59.525258 1915516 command_runner.go:130] > # default_transport = "docker://"
	I1222 00:26:59.525363 1915516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1222 00:26:59.525377 1915516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525381 1915516 command_runner.go:130] > # global_auth_file = ""
	I1222 00:26:59.525392 1915516 command_runner.go:130] > # The image used to instantiate infra containers.
	I1222 00:26:59.525398 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525403 1915516 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.525414 1915516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1222 00:26:59.525425 1915516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525432 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525436 1915516 command_runner.go:130] > # pause_image_auth_file = ""
	I1222 00:26:59.525442 1915516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1222 00:26:59.525450 1915516 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1222 00:26:59.525460 1915516 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1222 00:26:59.525466 1915516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1222 00:26:59.525474 1915516 command_runner.go:130] > # pause_command = "/pause"
	I1222 00:26:59.525480 1915516 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1222 00:26:59.525491 1915516 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1222 00:26:59.525499 1915516 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1222 00:26:59.525509 1915516 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1222 00:26:59.525515 1915516 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1222 00:26:59.525521 1915516 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1222 00:26:59.525527 1915516 command_runner.go:130] > # pinned_images = [
	I1222 00:26:59.525530 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525536 1915516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1222 00:26:59.525545 1915516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1222 00:26:59.525557 1915516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1222 00:26:59.525563 1915516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1222 00:26:59.525572 1915516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1222 00:26:59.525578 1915516 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1222 00:26:59.525584 1915516 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1222 00:26:59.525594 1915516 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1222 00:26:59.525601 1915516 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1222 00:26:59.525608 1915516 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1222 00:26:59.525616 1915516 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1222 00:26:59.525621 1915516 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
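For reference, the policy files mentioned above follow containers-policy.json(5); the minimal permissive form is the well-known default, and a namespaced policy would sit at e.g. /etc/crio/policies/kube-system.json (illustrative path):

	{
	    "default": [
	        { "type": "insecureAcceptAnything" }
	    ]
	}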
	I1222 00:26:59.525627 1915516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1222 00:26:59.525634 1915516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1222 00:26:59.525642 1915516 command_runner.go:130] > # changing them here.
	I1222 00:26:59.525648 1915516 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1222 00:26:59.525652 1915516 command_runner.go:130] > # insecure_registries = [
	I1222 00:26:59.525659 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525666 1915516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1222 00:26:59.525676 1915516 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1222 00:26:59.525684 1915516 command_runner.go:130] > # image_volumes = "mkdir"
	I1222 00:26:59.525692 1915516 command_runner.go:130] > # Temporary directory to use for storing big files
	I1222 00:26:59.525697 1915516 command_runner.go:130] > # big_files_temporary_dir = ""
	I1222 00:26:59.525703 1915516 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1222 00:26:59.525711 1915516 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1222 00:26:59.525717 1915516 command_runner.go:130] > # auto_reload_registries = false
	I1222 00:26:59.525724 1915516 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1222 00:26:59.525734 1915516 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1222 00:26:59.525744 1915516 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1222 00:26:59.525748 1915516 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1222 00:26:59.525770 1915516 command_runner.go:130] > # The mode of short name resolution.
	I1222 00:26:59.525780 1915516 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1222 00:26:59.525788 1915516 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1222 00:26:59.525794 1915516 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1222 00:26:59.525800 1915516 command_runner.go:130] > # short_name_mode = "enforcing"
	I1222 00:26:59.525807 1915516 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1222 00:26:59.525813 1915516 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1222 00:26:59.525821 1915516 command_runner.go:130] > # oci_artifact_mount_support = true
	I1222 00:26:59.525829 1915516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1222 00:26:59.525836 1915516 command_runner.go:130] > # CNI plugins.
	I1222 00:26:59.525843 1915516 command_runner.go:130] > [crio.network]
	I1222 00:26:59.525850 1915516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1222 00:26:59.525859 1915516 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1222 00:26:59.525863 1915516 command_runner.go:130] > # cni_default_network = ""
	I1222 00:26:59.525869 1915516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1222 00:26:59.525877 1915516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1222 00:26:59.525883 1915516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1222 00:26:59.525887 1915516 command_runner.go:130] > # plugin_dirs = [
	I1222 00:26:59.525891 1915516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1222 00:26:59.525897 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525903 1915516 command_runner.go:130] > # List of included pod metrics.
	I1222 00:26:59.525908 1915516 command_runner.go:130] > # included_pod_metrics = [
	I1222 00:26:59.525913 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525920 1915516 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1222 00:26:59.525926 1915516 command_runner.go:130] > [crio.metrics]
	I1222 00:26:59.525932 1915516 command_runner.go:130] > # Globally enable or disable metrics support.
	I1222 00:26:59.525936 1915516 command_runner.go:130] > # enable_metrics = false
	I1222 00:26:59.525945 1915516 command_runner.go:130] > # Specify enabled metrics collectors.
	I1222 00:26:59.525950 1915516 command_runner.go:130] > # Per default all metrics are enabled.
	I1222 00:26:59.525957 1915516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1222 00:26:59.525969 1915516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1222 00:26:59.525976 1915516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1222 00:26:59.525981 1915516 command_runner.go:130] > # metrics_collectors = [
	I1222 00:26:59.525987 1915516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1222 00:26:59.525992 1915516 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1222 00:26:59.525998 1915516 command_runner.go:130] > # 	"containers_oom_total",
	I1222 00:26:59.526006 1915516 command_runner.go:130] > # 	"processes_defunct",
	I1222 00:26:59.526010 1915516 command_runner.go:130] > # 	"operations_total",
	I1222 00:26:59.526014 1915516 command_runner.go:130] > # 	"operations_latency_seconds",
	I1222 00:26:59.526019 1915516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1222 00:26:59.526023 1915516 command_runner.go:130] > # 	"operations_errors_total",
	I1222 00:26:59.526028 1915516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1222 00:26:59.526033 1915516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1222 00:26:59.526037 1915516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1222 00:26:59.526041 1915516 command_runner.go:130] > # 	"image_pulls_success_total",
	I1222 00:26:59.526045 1915516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1222 00:26:59.526050 1915516 command_runner.go:130] > # 	"containers_oom_count_total",
	I1222 00:26:59.526055 1915516 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1222 00:26:59.526059 1915516 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1222 00:26:59.526064 1915516 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1222 00:26:59.526066 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526080 1915516 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1222 00:26:59.526084 1915516 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1222 00:26:59.526089 1915516 command_runner.go:130] > # The port on which the metrics server will listen.
	I1222 00:26:59.526093 1915516 command_runner.go:130] > # metrics_port = 9090
	I1222 00:26:59.526098 1915516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1222 00:26:59.526102 1915516 command_runner.go:130] > # metrics_socket = ""
	I1222 00:26:59.526107 1915516 command_runner.go:130] > # The certificate for the secure metrics server.
	I1222 00:26:59.526118 1915516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1222 00:26:59.526125 1915516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1222 00:26:59.526130 1915516 command_runner.go:130] > # certificate on any modification event.
	I1222 00:26:59.526134 1915516 command_runner.go:130] > # metrics_cert = ""
	I1222 00:26:59.526139 1915516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1222 00:26:59.526148 1915516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1222 00:26:59.526151 1915516 command_runner.go:130] > # metrics_key = ""
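A hedged sketch that enables the metrics endpoint with a reduced collector set (the collector names are taken from the list above; the other values mirror the documented defaults):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]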
	I1222 00:26:59.526157 1915516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1222 00:26:59.526163 1915516 command_runner.go:130] > [crio.tracing]
	I1222 00:26:59.526169 1915516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1222 00:26:59.526177 1915516 command_runner.go:130] > # enable_tracing = false
	I1222 00:26:59.526182 1915516 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1222 00:26:59.526187 1915516 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1222 00:26:59.526197 1915516 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1222 00:26:59.526205 1915516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
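Similarly, a minimal sketch enabling trace export to a local OTLP/gRPC collector, reusing the default endpoint shown above and the always-sample rate from the comment:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000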
	I1222 00:26:59.526213 1915516 command_runner.go:130] > # CRI-O NRI configuration.
	I1222 00:26:59.526218 1915516 command_runner.go:130] > [crio.nri]
	I1222 00:26:59.526222 1915516 command_runner.go:130] > # Globally enable or disable NRI.
	I1222 00:26:59.526226 1915516 command_runner.go:130] > # enable_nri = true
	I1222 00:26:59.526230 1915516 command_runner.go:130] > # NRI socket to listen on.
	I1222 00:26:59.526238 1915516 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1222 00:26:59.526242 1915516 command_runner.go:130] > # NRI plugin directory to use.
	I1222 00:26:59.526247 1915516 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1222 00:26:59.526255 1915516 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1222 00:26:59.526264 1915516 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1222 00:26:59.526270 1915516 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1222 00:26:59.526331 1915516 command_runner.go:130] > # nri_disable_connections = false
	I1222 00:26:59.526342 1915516 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1222 00:26:59.526347 1915516 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1222 00:26:59.526352 1915516 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1222 00:26:59.526359 1915516 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1222 00:26:59.526364 1915516 command_runner.go:130] > # NRI default validator configuration.
	I1222 00:26:59.526374 1915516 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1222 00:26:59.526381 1915516 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1222 00:26:59.526389 1915516 command_runner.go:130] > # can be restricted/rejected:
	I1222 00:26:59.526393 1915516 command_runner.go:130] > # - OCI hook injection
	I1222 00:26:59.526399 1915516 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1222 00:26:59.526406 1915516 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1222 00:26:59.526411 1915516 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1222 00:26:59.526416 1915516 command_runner.go:130] > # - adjustment of linux namespaces
	I1222 00:26:59.526425 1915516 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1222 00:26:59.526435 1915516 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1222 00:26:59.526442 1915516 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1222 00:26:59.526448 1915516 command_runner.go:130] > #
	I1222 00:26:59.526452 1915516 command_runner.go:130] > # [crio.nri.default_validator]
	I1222 00:26:59.526457 1915516 command_runner.go:130] > # nri_enable_default_validator = false
	I1222 00:26:59.526467 1915516 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1222 00:26:59.526474 1915516 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1222 00:26:59.526485 1915516 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1222 00:26:59.526492 1915516 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1222 00:26:59.526500 1915516 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1222 00:26:59.526505 1915516 command_runner.go:130] > # nri_validator_required_plugins = [
	I1222 00:26:59.526510 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526516 1915516 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1222 00:26:59.526525 1915516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1222 00:26:59.526530 1915516 command_runner.go:130] > [crio.stats]
	I1222 00:26:59.526543 1915516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1222 00:26:59.526549 1915516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1222 00:26:59.526556 1915516 command_runner.go:130] > # stats_collection_period = 0
	I1222 00:26:59.526563 1915516 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1222 00:26:59.526573 1915516 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1222 00:26:59.526577 1915516 command_runner.go:130] > # collection_period = 0
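As a sketch of the periodic (rather than on-demand) mode described above, a ten-second collection interval would be configured as:

	[crio.stats]
	stats_collection_period = 10
	collection_period = 10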
	I1222 00:26:59.526772 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.4923315Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1222 00:26:59.526793 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492368561Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1222 00:26:59.526804 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492394456Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1222 00:26:59.526811 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492418381Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1222 00:26:59.526829 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.49249673Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:59.526841 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492972301Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1222 00:26:59.526852 1915516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1222 00:26:59.526929 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:59.526945 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:59.526969 1915516 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:26:59.526995 1915516 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:26:59.527121 1915516 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
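The manifest above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below. As a sketch, it can be sanity-checked on the node with kubeadm's own validator (the binary path matches the one probed in the next step):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new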
	I1222 00:26:59.527200 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:26:59.535433 1915516 command_runner.go:130] > kubeadm
	I1222 00:26:59.535512 1915516 command_runner.go:130] > kubectl
	I1222 00:26:59.535531 1915516 command_runner.go:130] > kubelet
	I1222 00:26:59.535576 1915516 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:26:59.535650 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:26:59.543039 1915516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:26:59.555537 1915516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:26:59.568459 1915516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 00:26:59.580757 1915516 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:26:59.584332 1915516 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:26:59.584456 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.724165 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:00.346235 1915516 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:27:00.346307 1915516 certs.go:195] generating shared ca certs ...
	I1222 00:27:00.346339 1915516 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:00.346533 1915516 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:27:00.346648 1915516 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:27:00.346710 1915516 certs.go:257] generating profile certs ...
	I1222 00:27:00.346876 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:27:00.347136 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:27:00.347262 1915516 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:27:00.347310 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:27:00.347347 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:27:00.347391 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:27:00.347455 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:27:00.347514 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:27:00.347563 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:27:00.347599 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:27:00.347655 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:27:00.347758 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:27:00.347824 1915516 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:27:00.347874 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:27:00.347933 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:27:00.347992 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:27:00.348063 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:27:00.348147 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:27:00.348210 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem -> /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.348254 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.348294 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.349116 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:27:00.373997 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:27:00.395587 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:27:00.415350 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:27:00.433576 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:27:00.451786 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:27:00.469999 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:27:00.488347 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:27:00.506732 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:27:00.524546 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:27:00.542404 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:27:00.559977 1915516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:27:00.572924 1915516 ssh_runner.go:195] Run: openssl version
	I1222 00:27:00.579133 1915516 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:27:00.579568 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.587905 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:27:00.596398 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600098 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600361 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600432 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.641247 1915516 command_runner.go:130] > 51391683
	I1222 00:27:00.641737 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:27:00.649412 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.657021 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:27:00.664905 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668819 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668897 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668950 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.710718 1915516 command_runner.go:130] > 3ec20f2e
	I1222 00:27:00.711212 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:27:00.718755 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.725802 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:27:00.733002 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736446 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736472 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736534 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.776933 1915516 command_runner.go:130] > b5213941
	I1222 00:27:00.777458 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
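The three hash-and-link cycles above follow the standard OpenSSL CA directory layout: each PEM is hashed with openssl x509 -hash and exposed as <hash>.0 under /etc/ssl/certs. Reproducing the last cycle by hand, with the paths from this run:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	test -L "/etc/ssl/certs/${h}.0" && echo "linked as ${h}.0"   # b5213941.0 in this run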
	I1222 00:27:00.784943 1915516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788657 1915516 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788682 1915516 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:27:00.788688 1915516 command_runner.go:130] > Device: 259,1	Inode: 1585688     Links: 1
	I1222 00:27:00.788695 1915516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:27:00.788729 1915516 command_runner.go:130] > Access: 2025-12-22 00:22:53.052243967 +0000
	I1222 00:27:00.788738 1915516 command_runner.go:130] > Modify: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788744 1915516 command_runner.go:130] > Change: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788752 1915516 command_runner.go:130] >  Birth: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788843 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:27:00.834300 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.834770 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:27:00.875032 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.875494 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:27:00.916072 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.916544 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:27:00.957168 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.957608 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:27:01.005250 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:01.005855 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:27:01.046783 1915516 command_runner.go:130] > Certificate will not expire
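Each "Certificate will not expire" line is the success path of openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. A sketch looping over the same certificate set; the glob is illustrative:

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "$c" \
	    && echo "$c: will not expire" || echo "$c: expires within 24h"
	done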
	I1222 00:27:01.047595 1915516 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:27:01.047694 1915516 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:27:01.047794 1915516 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:27:01.074422 1915516 cri.go:96] found id: ""
	I1222 00:27:01.074566 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:27:01.081706 1915516 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:27:01.081729 1915516 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:27:01.081736 1915516 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:27:01.082711 1915516 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:27:01.082752 1915516 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:27:01.082809 1915516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:27:01.090505 1915516 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:27:01.090899 1915516 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091003 1915516 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "functional-837733" cluster setting kubeconfig missing "functional-837733" context setting]
	I1222 00:27:01.091279 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.091694 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091853 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.092420 1915516 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:27:01.092441 1915516 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:27:01.092469 1915516 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:27:01.092508 1915516 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:27:01.092519 1915516 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:27:01.092524 1915516 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:27:01.092821 1915516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:27:01.103318 1915516 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:27:01.103353 1915516 kubeadm.go:602] duration metric: took 20.58853ms to restartPrimaryControlPlane
	I1222 00:27:01.103366 1915516 kubeadm.go:403] duration metric: took 55.780111ms to StartCluster
	I1222 00:27:01.103381 1915516 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.103449 1915516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.104102 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.104313 1915516 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:27:01.104706 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:27:01.104752 1915516 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:27:01.104888 1915516 addons.go:70] Setting storage-provisioner=true in profile "functional-837733"
	I1222 00:27:01.104908 1915516 addons.go:239] Setting addon storage-provisioner=true in "functional-837733"
	I1222 00:27:01.104931 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.105709 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.105852 1915516 addons.go:70] Setting default-storageclass=true in profile "functional-837733"
	I1222 00:27:01.105876 1915516 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-837733"
	I1222 00:27:01.106124 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.111206 1915516 out.go:179] * Verifying Kubernetes components...
	I1222 00:27:01.114015 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:27:01.149988 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.150406 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.150740 1915516 addons.go:239] Setting addon default-storageclass=true in "functional-837733"
	I1222 00:27:01.150781 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.151633 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.160017 1915516 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:27:01.163813 1915516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.163840 1915516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:27:01.163933 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.208933 1915516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:01.208958 1915516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:27:01.209032 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.220299 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.259895 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.324483 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:01.352138 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.392769 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.095697 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.095745 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095787 1915516 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095683 1915516 node_ready.go:35] waiting up to 6m0s for node "functional-837733" to be "Ready" ...
	I1222 00:27:02.095893 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.095958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.096162 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.096189 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.096261 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.229628 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.250171 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.299403 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.303146 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.325073 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.325129 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.596516 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.596602 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.596960 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.700363 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.757664 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.757704 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.778883 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.845712 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.845792 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
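Every apply in this stretch fails the same way: kubectl cannot fetch the OpenAPI schema because the apiserver on port 8441 is not yet accepting connections, so minikube retries with backoff. A hand-rolled sketch of the same wait-then-apply; the /readyz probe and 2-second poll are assumptions, the kubectl invocation is the one from the log:

	until curl -ksf https://192.168.49.2:8441/readyz >/dev/null; do sleep 2; done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml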
	I1222 00:27:03.096069 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.096495 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.596210 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.596328 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.596586 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.599774 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:03.608220 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:03.704291 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.704413 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.724087 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.724140 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.096645 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.096746 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.097060 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.097110 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
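The GET loop above polls /api/v1/nodes/functional-837733 until its Ready condition turns True. The same probe expressed with kubectl, as a sketch (context and node names are this profile's):

	kubectl --context functional-837733 get node functional-837733 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'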
	I1222 00:27:04.395547 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:04.461482 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.461526 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.596939 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.597016 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.597354 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.908718 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:04.967177 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.967217 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:05.096456 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.096540 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.096895 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.596626 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.596711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.096915 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.096990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:06.097365 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:06.122623 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:06.180609 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.181800 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 00:27:06.185073 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.244148 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:06.244195 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.596862 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.596990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.597266 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.095976 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.096054 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.096380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.596061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.596391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.650649 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:07.682081 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:07.723708 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.723755 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:07.756657 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.756702 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:08.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.096407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.596459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:08.596512 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:09.096039 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.500982 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:09.557414 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:09.557460 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:09.596618 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.596687 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.596949 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[repeated at 00:27:10.096 with the same empty response]
	I1222 00:27:10.407596 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:10.466808 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:10.466847 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:10.596231 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.596316 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:10.596666 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling continued every ~500ms through 00:27:13.096 with identical requests and empty responses; the Ready-status warning repeated at 00:27:13.096]
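
The interleaved GETs come from minikube's node-readiness wait loop, which keeps fetching the node object and checking its Ready condition. A sketch of that kind of check with client-go, assuming the kubeconfig path and node name seen in the log (illustrative code, not minikube's node_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-837733", metav1.GetOptions{})
		if err != nil {
			// This is the branch the log keeps hitting: connect: connection refused.
			fmt.Println("error getting node:", err)
			return
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", c.Status)
			}
		}
	}
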
	I1222 00:27:13.110706 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:13.162639 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:13.165994 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:13.596604 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.597025 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[repeated at 00:27:14.096 and 00:27:14.596 with the same empty response]
	I1222 00:27:15.007048 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:15.075847 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:15.075897 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:15.095981 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.096063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.596504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling continued every ~500ms through 00:27:17.096 with identical requests and empty responses]
	I1222 00:27:17.256498 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:17.320974 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:17.324801 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:17.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.596544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:17.596599 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling continued every ~500ms through 00:27:23.596 with identical requests and empty responses; the Ready-status warning repeated at 00:27:19.596, 00:27:21.597, and 00:27:23.597]
	I1222 00:27:23.941866 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:23.998448 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:23.998497 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:23.998521 1915516 retry.go:84] will retry after 7.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
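
Once the quick re-applies keep failing, retry.go switches to longer waits; the intervals logged here and below (7.2s, then 9.7s, 15s, and 28.2s) are consistent with an exponential backoff with jitter. An illustrative backoff loop under that assumption (a sketch, not minikube's actual retry implementation):

	package main

	import (
		"fmt"
		"time"
	)

	// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
	// sleeping ~1.4x longer after each failure (7.2s -> ~10s -> ~14s -> ...).
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			if i < attempts-1 {
				fmt.Printf("will retry after %s: %v\n", delay.Round(100*time.Millisecond), err)
				time.Sleep(delay)
				delay = delay * 14 / 10
			}
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(4, 7200*time.Millisecond, func() error {
			return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
		})
	}
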
	I1222 00:27:24.096775 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.096858 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.097179 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[polling continued every ~500ms through 00:27:30.096 with identical requests and empty responses; Ready-status warnings at 00:27:26.096 and 00:27:28.596]
	I1222 00:27:30.321067 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:30.379433 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:30.379480 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.379505 1915516 retry.go:84] will retry after 9.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.596804 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.596878 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.597200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.597254 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[repeated at 00:27:31.096 with the same empty response]
	I1222 00:27:31.231597 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:31.293911 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:31.293951 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:31.596615 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[polling continued every ~500ms through 00:27:40.096 with identical requests and empty responses; Ready-status warnings at 00:27:32.597, 00:27:35.096, 00:27:37.097, and 00:27:39.596]
	I1222 00:27:40.098739 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:40.155643 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:40.159357 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.159395 1915516 retry.go:84] will retry after 15s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[polling continued every ~500ms through 00:27:46.096 with identical requests and empty responses (one response took 3ms at 00:27:42.101); Ready-status warnings at 00:27:41.597, 00:27:44.096, and 00:27:46.096]
	I1222 00:27:46.458265 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:46.518675 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:46.518729 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.518755 1915516 retry.go:84] will retry after 28.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.596931 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.597020 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.597346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[polling continued every ~500ms through 00:27:55.096 with identical requests and empty responses; Ready-status warnings at 00:27:48.096, 00:27:50.596, 00:27:53.096, and 00:27:55.096]
	I1222 00:27:55.200777 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:55.265479 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:55.265520 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:55.265546 1915516 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
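The retry.go:84 line above records minikube's generic retry helper scheduling another attempt ("will retry after 39.2s"). A rough sketch of that retry-after-delay pattern, assuming a fixed delay rather than minikube's actual backoff schedule:

package main

import (
	"fmt"
	"time"
)

// retryAfter runs fn; on failure it waits the given delay before the
// next attempt, emitting the same "will retry after" style message seen
// in the log. The schedule here is illustrative, not minikube's.
func retryAfter(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	return err
}

func main() {
	_ = retryAfter(3, 39200*time.Millisecond, func() error {
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}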
	[... polling continues every ~500ms with the same refused GET; periodic node_ready warnings only ...]
	W1222 00:27:57.096932 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:27:59.596476 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:01.596670 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:03.597153 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:06.096682 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:08.596475 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:11.096946 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:13.596572 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:14.678913 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:14.737274 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:14.740868 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:28:14.740903 1915516 retry.go:84] will retry after 44.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
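Each ssh_runner.go entry executes the addon apply inside the node: plain kubectl with KUBECONFIG pinned to the cluster config, failing here because client-side validation needs the (down) apiserver's OpenAPI endpoint; the error text itself names --validate=false as the escape hatch. A local os/exec sketch of the same invocation (illustrative only; minikube actually runs this over SSH inside the node):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command: kubectl apply --force against an addon
	// manifest, with KUBECONFIG set via sudo's VAR=value argument form.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver down, validation cannot download the OpenAPI
		// schema and kubectl exits 1, exactly as captured in the log.
		log.Printf("apply failed, will retry: %v", err)
	}
}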
	[... polling continues every ~500ms with the same refused GET; periodic node_ready warnings only ...]
	W1222 00:28:16.096575 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:18.097417 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:20.596506 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:22.596545 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:24.596589 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:26.596933 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:29.096425 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:31.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:33.596552 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:34.454097 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:28:34.514142 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514187 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514269 1915516 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
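The node_ready.go warnings throughout this section come from polling the node's Ready condition through client-go; while the apiserver is down, every GET surfaces as the dial error above. A condensed sketch of that readiness check, assuming client-go and the kubeconfig path shown in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is
// True; a dead apiserver surfaces here as the logged dial error.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), client, "functional-837733")
	fmt.Println(ready, err)
}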
	[... polling continues every ~500ms with the same refused GET; periodic node_ready warnings only ...]
	W1222 00:28:35.597390 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:38.096490 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:40.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:42.097586 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 00:28:44.596612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:46.596832 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:46.596908 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:46.597219 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:46.597275 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:47.095922 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:47.095989 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:47.096242 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:47.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:47.596082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:47.596368 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:48.096094 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:48.096173 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:48.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:48.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:48.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:48.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:49.096045 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:49.096118 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:49.096389 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:49.096433 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:49.596094 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:49.596170 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:49.596486 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:50.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:50.096103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:50.096376 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:50.596025 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:50.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:50.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:51.096359 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:51.096467 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:51.096794 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:51.096869 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:51.596603 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:51.596678 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:51.596972 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:52.096756 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:52.096859 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:52.097229 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:52.595971 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:52.596048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:52.596373 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:53.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:53.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:53.096394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:53.596060 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:53.596131 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:53.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:53.596547 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:54.096081 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:54.096159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:54.096525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:54.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:54.596085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:54.596398 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:55.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:55.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:55.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:55.596109 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:55.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:55.596510 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:56.096086 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:56.096159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:56.096414 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:56.096453 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:56.596382 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:56.596455 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:56.596777 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:57.096597 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:57.096690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:57.097021 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:57.596765 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:57.596835 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:57.597084 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:58.096904 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:58.096994 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:58.097340 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:58.097391 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:58.596044 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:58.596153 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:58.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
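(Editor's note: the paired "Request"/"Response" lines throughout this log come from a logging transport wrapped around the Kubernetes client; an empty status with milliseconds=0 means the TCP connect itself was refused, so no HTTP response ever existed. A minimal sketch of such a wrapper, assuming plain net/http rather than minikube's actual round_trippers implementation:)

	// Sketch only: an http.RoundTripper decorator that logs each request and
	// its outcome, mirroring the round_trippers.go lines in this log.
	package main

	import (
		"log"
		"net/http"
		"time"
	)

	type loggingRT struct{ next http.RoundTripper }

	func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("Request verb=%s url=%s", req.Method, req.URL)
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		ms := time.Since(start).Milliseconds()
		if err != nil {
			// Connection-level failures carry no response, hence the
			// status="" headers="" entries above.
			log.Printf("Response status=%q headers=%q milliseconds=%d err=%v", "", "", ms, err)
			return nil, err
		}
		log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
		if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-837733"); err != nil {
			log.Printf("request failed: %v", err)
		}
	}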
	I1222 00:28:59.061108 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:59.096537 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.096605 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.096887 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.132532 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138577 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138673 1915516 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:28:59.141746 1915516 out.go:179] * Enabled addons: 
	I1222 00:28:59.145405 1915516 addons.go:530] duration metric: took 1m58.040642899s for enable addons: enabled=[]
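(Editor's note: the storage-provisioner failure above is a downstream symptom of the same outage the poll loop is reporting, not an independent bug. kubectl's client-side validation first downloads the server's OpenAPI schema from https://localhost:8441/openapi/v2; with the apiserver refusing connections on port 8441 that download fails and the apply aborts before anything is submitted, so minikube gives up and records the addon run as enabled=[] after 1m58s of callbacks. The --validate=false escape hatch named in the error would not rescue the apply here, since submitting the manifest would hit the same refused connection.)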
	[... 00:28:59.596 – 00:29:39.596: polling continues unchanged every ~500ms, every request refused (Response status="" headers=""); node_ready.go:55 "will retry" warnings at 00:29:00, :02, :05, :07, :09, :11, :14, :16, :19, :21, :23, :25, :27, :30, :32, :34, :36 and :39 ...]
	I1222 00:29:40.096071 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:40.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:40.096512 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:40.596055 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:40.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:40.596469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:41.096427 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:41.096493 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:41.096765 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:41.596776 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:41.596860 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:41.597207 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:41.597262 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:42.097196 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:42.097290 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:42.097753 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:42.596494 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:42.596559 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:42.596818 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:43.096412 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:43.096485 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:43.096813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:43.596584 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:43.596660 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:43.596988 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:44.096629 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:44.096696 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:44.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:44.097007 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:44.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:44.596896 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:44.597195 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:45.097028 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:45.097117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:45.097583 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:45.596273 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:45.596360 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:45.596616 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.096043 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.096122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.596472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:46.596528 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:47.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.096065 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.096384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:47.596069 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.596146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.096477 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.596372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:49.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:49.096506 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:49.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.596112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.596446 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.096191 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.596046 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.596450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:51.096262 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.096340 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.096671 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:51.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:51.596674 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.596745 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.597007 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.096802 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.096883 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.097238 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.595995 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.596076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.096179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.596097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.596419 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:53.596472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:54.096190 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.096273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.096617 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:54.595974 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.596313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.096097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.596182 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.596258 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.596563 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:55.596609 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:56.096131 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.096478 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:56.596450 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.596522 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.596813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:58.096113 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.096201 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.096499 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:58.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:58.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.096348 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:00.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.096200 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.096548 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:00.096612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:00.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.096631 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.096833 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.596402 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.596486 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.596923 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:02.096741 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.096817 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.097099 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:02.097145 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:02.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.596554 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.596891 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.096678 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.096749 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.097049 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.596893 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.597160 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:04.096926 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.097001 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.097385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:04.097441 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:04.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.596406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.596102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.596436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.096136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.596009 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.596350 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:06.596396 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:07.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:07.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.596115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.096165 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.096526 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:08.596499 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:09.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.096074 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.096386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:09.596090 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.596516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.096073 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.096161 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:11.096517 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.096604 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:11.097278 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:11.596969 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.597041 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.597372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.096147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.596282 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.596358 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.096032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.096105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.096445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.596024 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.596105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.596479 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:13.596523 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:14.096010 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.096415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:14.596099 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.096501 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.596363 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:16.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:16.096639 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:16.596380 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.596453 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.596745 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.096475 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.096802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.596630 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.596731 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.597138 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:18.096998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.097081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.097420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:18.097478 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:18.596096 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.096545 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.096076 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:20.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:21.096445 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.096524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.096840 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:21.596761 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.596832 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.597105 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.096883 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.096964 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.097271 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.596982 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.597056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.597440 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:22.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:23.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:23.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.596157 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.596226 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:25.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:25.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:25.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.596100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.596424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.096120 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.096534 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.596546 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.596888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:27.096591 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.096677 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.097053 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:27.097108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:27.596848 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.596955 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.096089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.096457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.596475 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.096142 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.096209 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:29.596527 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 poll repeats every ~500ms with the same request headers and an empty response, from 00:30:30 through 00:31:31; every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 emits the same will-retry warning roughly every two seconds ...]
	I1222 00:31:31.596652 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:31.596724 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:31.597083 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:32.096810 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:32.096900 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:32.097155 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:32.097194 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:32.595938 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:32.596018 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:32.596399 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:33.096098 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:33.096179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:33.096514 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:33.596107 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:33.596178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:33.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:34.096053 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:34.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:34.096448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:34.596176 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:34.596250 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:34.596573 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:34.596630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:35.096275 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:35.096351 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:35.096626 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:35.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:35.596118 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:35.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:36.096017 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:36.096089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:36.096417 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:36.595986 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:36.596055 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:36.596358 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:37.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:37.096090 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:37.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:37.096479 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:37.596024 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:37.596105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:37.596436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:38.096151 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:38.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:38.096515 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:38.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:38.596111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:38.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:39.096024 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:39.096099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:39.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:39.596099 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:39.596167 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:39.596423 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:39.596461 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:40.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:40.096122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:40.096535 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:40.596239 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:40.596317 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:40.596658 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:41.096446 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:41.096515 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:41.096773 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:41.596719 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:41.596796 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:41.597144 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:41.597198 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:42.095939 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:42.096035 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:42.096481 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:42.596006 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:42.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:42.596396 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:43.096086 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:43.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:43.096515 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:43.596213 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:43.596294 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:43.596619 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:44.096156 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:44.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:44.096508 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:44.096557 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:44.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:44.596131 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:44.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:45.096206 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:45.096300 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:45.096704 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:45.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:45.596071 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:45.596317 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:46.096028 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:46.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:46.096449 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:46.596153 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:46.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:46.596566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:46.596617 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:47.095989 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:47.096057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:47.096301 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:47.595982 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:47.596058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:47.596394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:48.096107 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:48.096200 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:48.096526 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:48.595989 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:48.596058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:48.596384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:49.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:49.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:49.096455 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:49.096507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:49.596031 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:49.596112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:49.596450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:50.096150 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:50.096219 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:50.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:50.596217 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:50.596292 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:50.596622 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:51.096688 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:51.096769 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:51.097192 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:51.097261 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:51.595908 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:51.595986 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:51.596268 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:52.095979 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:52.096057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:52.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:52.596180 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:52.596273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:52.596678 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:53.096355 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:53.096427 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:53.096682 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:53.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:53.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:53.596491 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:53.596544 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:54.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:54.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:54.096480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:54.596158 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:54.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:54.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:55.096042 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:55.096117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:55.596055 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:55.596132 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:55.596488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:56.096195 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:56.096268 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:56.096566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:56.096624 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:56.596612 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:56.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:56.597015 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:57.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:57.096867 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:57.097193 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:57.595906 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:57.595972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:57.596218 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:58.097085 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:58.097172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:58.097567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:58.097619 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:58.596165 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:58.596241 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:58.596588 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:59.096257 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:59.096325 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:59.096568 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:59.596240 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:59.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:59.596636 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:00.096136 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:00.096222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:00.096550 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:00.596394 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:00.596465 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:00.596730 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:00.596772 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:01.096774 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:01.096885 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:01.097366 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:01.595973 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:01.596048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:01.597013 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.096796 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.096863 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.097153 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.596892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.596965 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:02.597411 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:03.095937 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.096024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.096410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:03.596093 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.596163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.596410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.596053 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.596474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:05.096026 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.096107 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:05.096479 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:05.596083 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.596157 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.096062 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.096137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.096505 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.596315 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.596381 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.596649 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:07.096321 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.096392 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:07.096788 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:07.596571 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.596648 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.596948 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.096706 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.096778 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.097061 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.596819 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.596900 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.597237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.095966 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.096044 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.596095 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:09.596474 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:10.096007 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.096404 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:10.596067 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.596141 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.096744 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.096823 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.097172 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.595924 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.596004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.596347 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.096155 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:12.096542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:12.595989 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.596092 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.096106 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.096437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.596489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:14.596542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:15.096232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.096313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.096651 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:15.596333 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.596407 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.596674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.096354 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.096430 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.096766 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.596753 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.596828 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.597170 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:16.597226 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:17.096933 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.097004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.097253 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:17.595951 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.596024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.095977 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.096058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.096393 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.596069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:19.096024 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.096412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:19.096467 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:19.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.096172 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.096518 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.596013 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.596415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:21.096789 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.096865 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.097200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:21.097256 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical 500ms poll of GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 repeats from 00:32:21.596 through 00:33:01.096 with the same protobuf/json Accept header, every response empty, and the same "connection refused" node_ready warning logged roughly every two seconds; ~80 near-verbatim request/response pairs elided ...]
	I1222 00:33:01.596985 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.597111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.597684 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:02.096224 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:02.096288 1915516 node_ready.go:38] duration metric: took 6m0.000473787s for node "functional-837733" to be "Ready" ...
	I1222 00:33:02.099371 1915516 out.go:203] 
	W1222 00:33:02.102245 1915516 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:33:02.102270 1915516 out.go:285] * 
	W1222 00:33:02.104397 1915516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:33:02.107416 1915516 out.go:203] 
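
The wait that just timed out is minikube's fixed-deadline readiness poll: a GET of /api/v1/nodes/functional-837733 every 500ms for 6 minutes, treating every dial error as transient until the deadline expires. Below is a minimal client-go sketch of that pattern; the package and function names (nodewait, waitNodeReady) are illustrative, not minikube's actual source.

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the API server every 500ms until the node reports
	// Ready=True or the 6-minute deadline expires -- the cadence and deadline
	// visible in the log above. Errors from Get (e.g. "connection refused"
	// while the apiserver is down) are swallowed so the poll keeps retrying,
	// which is why each failure is only a warning followed by another attempt.
	func waitNodeReady(cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient: retry until the deadline
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

With the apiserver down for the entire window, every attempt returns "connection refused" and the loop can only exit via the context deadline, which is the GUEST_START failure reported above.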
	
	
	==> CRI-O <==
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215802098Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.21580977Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215814963Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215825138Z" level=info msg="RDT not available in the host system"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215839906Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.21663043Z" level=info msg="Conmon does support the --sync option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.216657564Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.216672718Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.217304051Z" level=info msg="Conmon does support the --sync option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.217387167Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.217509526Z" level=info msg="Updated default CNI network name to "
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.218199729Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.218658686Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.218721314Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282258133Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282418842Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282513739Z" level=info msg="Create NRI interface"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282663019Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282683572Z" level=info msg="runtime interface created"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282699145Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282705364Z" level=info msg="runtime interface starting up..."
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282711313Z" level=info msg="starting plugins..."
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282726615Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282801854Z" level=info msg="No systemd watchdog enabled"
	Dec 22 00:26:59 functional-837733 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:33:03.985996    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:03.986495    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:03.988139    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:03.988616    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:03.990179    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:33:04 up 1 day,  7:15,  0 user,  load average: 0.02, 0.21, 0.76
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:33:01 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:02 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 22 00:33:02 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:02 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:02 functional-837733 kubelet[8518]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:02 functional-837733 kubelet[8518]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:02 functional-837733 kubelet[8518]: E1222 00:33:02.177216    8518 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:02 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:02 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:02 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 22 00:33:02 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:02 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:02 functional-837733 kubelet[8524]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:02 functional-837733 kubelet[8524]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:02 functional-837733 kubelet[8524]: E1222 00:33:02.913589    8524 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:02 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:02 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:03 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 22 00:33:03 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:03 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:03 functional-837733 kubelet[8547]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:03 functional-837733 kubelet[8547]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:03 functional-837733 kubelet[8547]: E1222 00:33:03.650304    8547 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:03 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:03 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
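The restart loop above is the kubelet failing its own config validation and exiting immediately: the v1.35.0-rc.1 kubelet refuses to run on a cgroup v1 host, so systemd keeps restarting it (counter 1138, 1139, ...) without ever bringing the node up. A minimal triage sketch, assuming shell access to the host or the minikube container; this generic cgroup check is illustrative and not part of the test run:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy
	# cgroup v1, which this kubelet build rejects at startup.
	stat -fc %T /sys/fs/cgroup/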
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (340.694191ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.78s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-837733 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-837733 get po -A: exit status 1 (88.940819ms)
** stderr **
	E1222 00:33:05.177015 1919544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:05.178587 1919544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:05.180074 1919544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:05.181578 1919544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:05.183024 1919544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-837733 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1222 00:33:05.177015 1919544 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1222 00:33:05.178587 1919544 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1222 00:33:05.180074 1919544 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1222 00:33:05.181578 1919544 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1222 00:33:05.183024 1919544 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-837733 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-837733 get po -A"
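All three assertions fail for the same underlying reason: nothing is listening on 192.168.49.2:8441, so each client-go discovery retry gets "connection refused" and stdout stays empty. The probe can be reproduced without the test harness; a hedged sketch assuming the same kubeconfig context:

	# Succeeds with "ok" once the apiserver is up; fails with the identical
	# refused connection while it is down.
	kubectl --context functional-837733 get --raw /healthz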
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:
-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
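The inspect output shows the container itself is healthy (State.Status "running", RestartCount 0) and that the apiserver port 8441/tcp is published to 127.0.0.1:38474, so the break is inside the guest rather than in Docker networking. A quick host-side probe, sketched on the assumption that the port mapping above is still current:

	# "connection refused" here means no process inside the container listens
	# on 8441, independent of kubectl, kubeconfig, or certificates.
	curl -k https://127.0.0.1:38474/healthz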
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (297.564348ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
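The two status probes disagree by design: {{.Host}} reflects the Docker container (Running), while the earlier {{.APIServer}} probe reported Stopped, matching the kubelet crash loop in the logs above. For a combined view one could run the same binary without a format template (illustrative, not harness output):

	# Prints host, kubelet, apiserver, and kubeconfig states in one call.
	out/minikube-linux-arm64 status -p functional-837733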
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 logs -n 25: (1.026080015s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image rm kicbase/echo-server:functional-885873 --alsologtostderr                                                              │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                             │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image save --daemon kicbase/echo-server:functional-885873 --alsologtostderr                                                   │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/test/nested/copy/1881219/hosts                                                                              │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/1881219.pem                                                                                       │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /usr/share/ca-certificates/1881219.pem                                                                           │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                        │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/18812192.pem                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /usr/share/ca-certificates/18812192.pem                                                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format short --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format yaml --alsologtostderr                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ ssh            │ functional-885873 ssh pgrep buildkitd                                                                                                           │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ image          │ functional-885873 image ls --format json --alsologtostderr                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                            │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ start          │ -p functional-837733 --alsologtostderr -v=8                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:26 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:26:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:26:56.306253 1915516 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:26:56.306390 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306407 1915516 out.go:374] Setting ErrFile to fd 2...
	I1222 00:26:56.306413 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306666 1915516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:26:56.307062 1915516 out.go:368] Setting JSON to false
	I1222 00:26:56.307917 1915516 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112166,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:26:56.307982 1915516 start.go:143] virtualization:  
	I1222 00:26:56.313423 1915516 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:26:56.316413 1915516 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:26:56.316480 1915516 notify.go:221] Checking for updates...
	I1222 00:26:56.322104 1915516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:26:56.324851 1915516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:26:56.327697 1915516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:26:56.330483 1915516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:26:56.333306 1915516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:26:56.336519 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:56.336644 1915516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:26:56.361435 1915516 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:26:56.361557 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.415190 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.406057371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.415295 1915516 docker.go:319] overlay module found
	I1222 00:26:56.418305 1915516 out.go:179] * Using the docker driver based on existing profile
	I1222 00:26:56.421198 1915516 start.go:309] selected driver: docker
	I1222 00:26:56.421214 1915516 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.421302 1915516 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:26:56.421450 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.481024 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.471849294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.481474 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:56.481538 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:56.481593 1915516 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.484769 1915516 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:26:56.487560 1915516 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:26:56.490463 1915516 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:26:56.493542 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:56.493593 1915516 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:26:56.493605 1915516 cache.go:65] Caching tarball of preloaded images
	I1222 00:26:56.493623 1915516 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:26:56.493685 1915516 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:26:56.493695 1915516 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:26:56.493807 1915516 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:26:56.512598 1915516 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:26:56.512623 1915516 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:26:56.512639 1915516 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:26:56.512668 1915516 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:26:56.512725 1915516 start.go:364] duration metric: took 34.567µs to acquireMachinesLock for "functional-837733"
	I1222 00:26:56.512749 1915516 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:26:56.512754 1915516 fix.go:54] fixHost starting: 
	I1222 00:26:56.513015 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:26:56.528652 1915516 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:26:56.528683 1915516 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:26:56.531906 1915516 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:26:56.531939 1915516 machine.go:94] provisionDockerMachine start ...
	I1222 00:26:56.532017 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.548171 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.548493 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.548508 1915516 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:26:56.684687 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.684712 1915516 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:26:56.684788 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.701819 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.702143 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.702159 1915516 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:26:56.841689 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.841784 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.859107 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.859418 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.859440 1915516 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:26:56.989653 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:26:56.989683 1915516 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:26:56.989702 1915516 ubuntu.go:190] setting up certificates
	I1222 00:26:56.989718 1915516 provision.go:84] configureAuth start
	I1222 00:26:56.989779 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:57.009887 1915516 provision.go:143] copyHostCerts
	I1222 00:26:57.009938 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.009973 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:26:57.009985 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.010062 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:26:57.010160 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010187 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:26:57.010205 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010234 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:26:57.010285 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010306 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:26:57.010311 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010345 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:26:57.010415 1915516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:26:57.201216 1915516 provision.go:177] copyRemoteCerts
	I1222 00:26:57.201293 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:26:57.201364 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.220369 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.317164 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:26:57.317296 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:26:57.334371 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:26:57.334443 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:26:57.355914 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:26:57.356016 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:26:57.374427 1915516 provision.go:87] duration metric: took 384.686066ms to configureAuth
	I1222 00:26:57.374456 1915516 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:26:57.374673 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:57.374803 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.396901 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:57.397214 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:57.397228 1915516 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:26:57.748387 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:26:57.748426 1915516 machine.go:97] duration metric: took 1.216463326s to provisionDockerMachine
	I1222 00:26:57.748438 1915516 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:26:57.748450 1915516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:26:57.748516 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:26:57.748562 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.764805 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.860955 1915516 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:26:57.863953 1915516 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:26:57.863971 1915516 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:26:57.863984 1915516 command_runner.go:130] > VERSION_ID="12"
	I1222 00:26:57.863988 1915516 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:26:57.863993 1915516 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:26:57.863997 1915516 command_runner.go:130] > ID=debian
	I1222 00:26:57.864002 1915516 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:26:57.864007 1915516 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:26:57.864014 1915516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:26:57.864062 1915516 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:26:57.864078 1915516 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:26:57.864088 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:26:57.864138 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:26:57.864212 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:26:57.864218 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /etc/ssl/certs/18812192.pem
	I1222 00:26:57.864290 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:26:57.864295 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> /etc/test/nested/copy/1881219/hosts
	I1222 00:26:57.864343 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:26:57.871401 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:26:57.887952 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:26:57.904770 1915516 start.go:296] duration metric: took 156.318697ms for postStartSetup
	I1222 00:26:57.904856 1915516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:26:57.904900 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.924440 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.018813 1915516 command_runner.go:130] > 12%
	I1222 00:26:58.018908 1915516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:26:58.024077 1915516 command_runner.go:130] > 172G
	I1222 00:26:58.024155 1915516 fix.go:56] duration metric: took 1.511397372s for fixHost
	I1222 00:26:58.024173 1915516 start.go:83] releasing machines lock for "functional-837733", held for 1.511435197s
	I1222 00:26:58.024244 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:58.041457 1915516 ssh_runner.go:195] Run: cat /version.json
	I1222 00:26:58.041501 1915516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:26:58.041514 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.041557 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.062715 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.064581 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.169068 1915516 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:26:58.266136 1915516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:26:58.269129 1915516 ssh_runner.go:195] Run: systemctl --version
	I1222 00:26:58.275371 1915516 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:26:58.275406 1915516 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:26:58.275466 1915516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:26:58.310186 1915516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:26:58.314515 1915516 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:26:58.314563 1915516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:26:58.314620 1915516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:26:58.322376 1915516 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:26:58.322401 1915516 start.go:496] detecting cgroup driver to use...
	I1222 00:26:58.322452 1915516 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:26:58.322524 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:26:58.337747 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:26:58.350732 1915516 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:26:58.350819 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:26:58.365932 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:26:58.378579 1915516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:26:58.492803 1915516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:26:58.604423 1915516 docker.go:234] disabling docker service ...
	I1222 00:26:58.604496 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:26:58.619028 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:26:58.631440 1915516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:26:58.747823 1915516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:26:58.866539 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:26:58.885381 1915516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:26:58.901152 1915516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
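	With /etc/crictl.yaml written as above, crictl no longer needs an explicit endpoint flag. A quick sanity check on the node (standard crictl usage, not part of this run) would be:
	    sudo crictl info
	    # equivalent to: sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info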
	I1222 00:26:58.902678 1915516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:26:58.902778 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.912460 1915516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:26:58.912567 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.924743 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.935404 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.944063 1915516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:26:58.951908 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.960893 1915516 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.969826 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
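	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape (reconstructed from the commands here and corroborated by the `crio config` dump later in the log; surrounding keys omitted):
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]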
	I1222 00:26:58.978699 1915516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:26:58.985309 1915516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:26:58.986567 1915516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
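	The direct write to /proc/sys above is equivalent to the sysctl form (shown for reference only; this run uses the echo):
	    sudo sysctl -w net.ipv4.ip_forward=1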
	I1222 00:26:58.993813 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.112143 1915516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:26:59.288646 1915516 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:26:59.288768 1915516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:26:59.292480 1915516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1222 00:26:59.292507 1915516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:26:59.292515 1915516 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1222 00:26:59.292522 1915516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:26:59.292531 1915516 command_runner.go:130] > Access: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292537 1915516 command_runner.go:130] > Modify: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292542 1915516 command_runner.go:130] > Change: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292549 1915516 command_runner.go:130] >  Birth: -
	I1222 00:26:59.292591 1915516 start.go:564] Will wait 60s for crictl version
	I1222 00:26:59.292646 1915516 ssh_runner.go:195] Run: which crictl
	I1222 00:26:59.295996 1915516 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:26:59.296252 1915516 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:26:59.318308 1915516 command_runner.go:130] > Version:  0.1.0
	I1222 00:26:59.318367 1915516 command_runner.go:130] > RuntimeName:  cri-o
	I1222 00:26:59.318402 1915516 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1222 00:26:59.318421 1915516 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:26:59.320537 1915516 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:26:59.320652 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.347108 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.347172 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.347193 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.347237 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.347255 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.347273 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.347296 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.347319 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.347336 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.347352 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.347374 1915516 command_runner.go:130] >      static
	I1222 00:26:59.347390 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.347405 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.347421 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.347442 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.347466 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.347483 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.347500 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.347516 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.347534 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.350153 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.376069 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.376144 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.376164 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.376182 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.376201 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.376235 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.376261 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.376279 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.376305 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.376329 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.376347 1915516 command_runner.go:130] >      static
	I1222 00:26:59.376375 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.376391 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.376409 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.376433 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.376462 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.376479 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.376495 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.376513 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.376545 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.383960 1915516 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:26:59.386763 1915516 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:26:59.402614 1915516 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:26:59.406608 1915516 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:26:59.406693 1915516 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:26:59.406800 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:59.406861 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.440185 1915516 command_runner.go:130] > {
	I1222 00:26:59.440203 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.440208 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440216 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.440220 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440226 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.440230 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440234 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440243 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.440251 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.440254 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440259 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.440263 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440270 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440273 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440277 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440284 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.440288 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440301 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.440305 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440309 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440317 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.440326 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.440329 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440333 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.440337 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440344 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440347 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440350 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440357 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.440361 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440366 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.440369 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440373 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440381 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.440388 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.440392 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440396 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.440404 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.440408 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440411 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440414 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440421 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.440424 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440429 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.440433 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440437 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440445 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.440452 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.440455 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440459 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.440466 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440470 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440474 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440482 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440486 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440489 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440492 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440499 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.440502 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440509 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.440512 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440516 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440524 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.440532 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.440535 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440539 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.440542 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440546 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440549 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440553 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440557 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440560 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440563 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440569 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.440573 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440578 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.440582 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440586 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440594 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.440602 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.440606 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440610 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.440613 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440624 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440627 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440631 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440634 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440637 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440640 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440647 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.440651 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440656 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.440659 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440663 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440671 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.440679 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.440681 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440685 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.440689 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440693 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440697 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440700 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440706 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.440710 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440714 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.440718 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440721 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440729 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.440745 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.440748 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440752 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.440755 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440759 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440762 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440765 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440769 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440775 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440779 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440785 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.440789 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440793 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.440797 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440800 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440808 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.440816 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.440819 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440842 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.440846 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440850 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.440853 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440857 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440860 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.440863 1915516 command_runner.go:130] >     }
	I1222 00:26:59.440865 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.440868 1915516 command_runner.go:130] > }
	I1222 00:26:59.441031 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.441039 1915516 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:26:59.441099 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.463515 1915516 command_runner.go:130] > {
	I1222 00:26:59.463533 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.463538 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463547 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.463552 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463566 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.463570 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463574 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463582 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.463590 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.463594 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463598 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.463602 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463606 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463610 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463613 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463619 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.463624 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463629 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.463632 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463636 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463655 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.463664 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.463667 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463671 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.463675 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463680 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463683 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463686 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463693 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.463697 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463703 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.463706 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463710 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463718 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.463726 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.463730 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463734 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.463741 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.463745 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463748 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463752 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463758 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.463761 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463767 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.463771 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463775 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463782 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.463790 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.463793 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463797 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.463801 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463805 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463808 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463816 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463819 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463823 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463826 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463832 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.463836 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463841 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.463844 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463848 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463855 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.463863 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.463867 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463871 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.463875 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463878 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463882 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463885 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463893 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463896 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463899 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463905 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.463909 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463914 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.463917 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463921 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463929 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.463937 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.463940 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463944 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.463947 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463951 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463954 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463958 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463962 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463965 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463968 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463974 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.463979 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463984 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.463987 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463991 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463998 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.464006 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.464009 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464013 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.464017 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464021 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464024 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464027 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464033 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.464040 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464046 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.464050 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464053 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464061 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.464078 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.464081 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464085 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.464089 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464093 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.464096 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464100 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464103 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464106 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464109 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464115 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.464119 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464123 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.464126 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464130 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464137 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.464145 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.464148 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464151 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.464155 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464158 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.464161 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464165 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464168 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.464171 1915516 command_runner.go:130] >     }
	I1222 00:26:59.464174 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.464176 1915516 command_runner.go:130] > }
	I1222 00:26:59.467117 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.467182 1915516 cache_images.go:86] Images are preloaded, skipping loading
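	Both `crictl images --output json` dumps above list the same nine preloaded images. A compact way to extract just the tags from such a dump (assuming jq is available on the node) is:
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'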
	I1222 00:26:59.467204 1915516 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:26:59.467349 1915516 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
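	The empty `ExecStart=` line in the generated unit above is deliberate systemd drop-in semantics: it clears the ExecStart inherited from the base kubelet.service before the minikube-specific invocation is set. To inspect the merged unit on the node one could run (standard systemctl, not minikube-specific):
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart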
	I1222 00:26:59.467469 1915516 ssh_runner.go:195] Run: crio config
	I1222 00:26:59.517634 1915516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1222 00:26:59.517672 1915516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1222 00:26:59.517681 1915516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1222 00:26:59.517684 1915516 command_runner.go:130] > #
	I1222 00:26:59.517692 1915516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1222 00:26:59.517698 1915516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1222 00:26:59.517704 1915516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1222 00:26:59.517712 1915516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1222 00:26:59.517716 1915516 command_runner.go:130] > # reload'.
	I1222 00:26:59.517723 1915516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1222 00:26:59.517729 1915516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1222 00:26:59.517735 1915516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1222 00:26:59.517742 1915516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1222 00:26:59.517745 1915516 command_runner.go:130] > [crio]
	I1222 00:26:59.517751 1915516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1222 00:26:59.517756 1915516 command_runner.go:130] > # containers images, in this directory.
	I1222 00:26:59.517994 1915516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1222 00:26:59.518006 1915516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1222 00:26:59.518014 1915516 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1222 00:26:59.518022 1915516 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1222 00:26:59.518026 1915516 command_runner.go:130] > # imagestore = ""
	I1222 00:26:59.518032 1915516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1222 00:26:59.518043 1915516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1222 00:26:59.518048 1915516 command_runner.go:130] > # storage_driver = "overlay"
	I1222 00:26:59.518053 1915516 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1222 00:26:59.518059 1915516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1222 00:26:59.518209 1915516 command_runner.go:130] > # storage_option = [
	I1222 00:26:59.518216 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.518229 1915516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1222 00:26:59.518237 1915516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1222 00:26:59.518375 1915516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1222 00:26:59.518385 1915516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1222 00:26:59.518392 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1222 00:26:59.518406 1915516 command_runner.go:130] > # always happen on a node reboot
	I1222 00:26:59.518412 1915516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1222 00:26:59.518638 1915516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1222 00:26:59.518654 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1222 00:26:59.518660 1915516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1222 00:26:59.521919 1915516 command_runner.go:130] > # version_file_persist = ""
	I1222 00:26:59.521945 1915516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1222 00:26:59.521986 1915516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1222 00:26:59.521998 1915516 command_runner.go:130] > # internal_wipe = true
	I1222 00:26:59.522012 1915516 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1222 00:26:59.522023 1915516 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1222 00:26:59.522028 1915516 command_runner.go:130] > # internal_repair = true
	I1222 00:26:59.522049 1915516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1222 00:26:59.522075 1915516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1222 00:26:59.522087 1915516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1222 00:26:59.522103 1915516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1222 00:26:59.522130 1915516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1222 00:26:59.522140 1915516 command_runner.go:130] > [crio.api]
	I1222 00:26:59.522156 1915516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1222 00:26:59.522172 1915516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1222 00:26:59.522199 1915516 command_runner.go:130] > # IP address on which the stream server will listen.
	I1222 00:26:59.522210 1915516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1222 00:26:59.522227 1915516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1222 00:26:59.522243 1915516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1222 00:26:59.522251 1915516 command_runner.go:130] > # stream_port = "0"
	I1222 00:26:59.522257 1915516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1222 00:26:59.522262 1915516 command_runner.go:130] > # stream_enable_tls = false
	I1222 00:26:59.522297 1915516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1222 00:26:59.522308 1915516 command_runner.go:130] > # stream_idle_timeout = ""
	I1222 00:26:59.522315 1915516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1222 00:26:59.522330 1915516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522334 1915516 command_runner.go:130] > # stream_tls_cert = ""
	I1222 00:26:59.522352 1915516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1222 00:26:59.522421 1915516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522433 1915516 command_runner.go:130] > # stream_tls_key = ""
	I1222 00:26:59.522440 1915516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1222 00:26:59.522455 1915516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1222 00:26:59.522468 1915516 command_runner.go:130] > # automatically pick up the changes.
	I1222 00:26:59.522477 1915516 command_runner.go:130] > # stream_tls_ca = ""
	I1222 00:26:59.522505 1915516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522519 1915516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1222 00:26:59.522527 1915516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522540 1915516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1222 00:26:59.522547 1915516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1222 00:26:59.522556 1915516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1222 00:26:59.522560 1915516 command_runner.go:130] > [crio.runtime]
	I1222 00:26:59.522574 1915516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1222 00:26:59.522584 1915516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1222 00:26:59.522588 1915516 command_runner.go:130] > # "nofile=1024:2048"
	I1222 00:26:59.522599 1915516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1222 00:26:59.522607 1915516 command_runner.go:130] > # default_ulimits = [
	I1222 00:26:59.522611 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522624 1915516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1222 00:26:59.522631 1915516 command_runner.go:130] > # no_pivot = false
	I1222 00:26:59.522642 1915516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1222 00:26:59.522652 1915516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1222 00:26:59.522657 1915516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1222 00:26:59.522672 1915516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1222 00:26:59.522681 1915516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1222 00:26:59.522689 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522700 1915516 command_runner.go:130] > # conmon = ""
	I1222 00:26:59.522705 1915516 command_runner.go:130] > # Cgroup setting for conmon
	I1222 00:26:59.522717 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1222 00:26:59.522721 1915516 command_runner.go:130] > conmon_cgroup = "pod"
	I1222 00:26:59.522733 1915516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1222 00:26:59.522741 1915516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1222 00:26:59.522760 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522767 1915516 command_runner.go:130] > # conmon_env = [
	I1222 00:26:59.522770 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522776 1915516 command_runner.go:130] > # Additional environment variables to set for all the
	I1222 00:26:59.522786 1915516 command_runner.go:130] > # containers. These are overridden if set in the
	I1222 00:26:59.522796 1915516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1222 00:26:59.522803 1915516 command_runner.go:130] > # default_env = [
	I1222 00:26:59.522807 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522813 1915516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1222 00:26:59.522829 1915516 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1222 00:26:59.522839 1915516 command_runner.go:130] > # selinux = false
	I1222 00:26:59.522846 1915516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1222 00:26:59.522861 1915516 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1222 00:26:59.522870 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522874 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.522886 1915516 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1222 00:26:59.522895 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522900 1915516 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1222 00:26:59.522911 1915516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1222 00:26:59.522926 1915516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1222 00:26:59.522936 1915516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1222 00:26:59.522948 1915516 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1222 00:26:59.522956 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522962 1915516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1222 00:26:59.522976 1915516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1222 00:26:59.522984 1915516 command_runner.go:130] > # the cgroup blockio controller.
	I1222 00:26:59.522992 1915516 command_runner.go:130] > # blockio_config_file = ""
	I1222 00:26:59.522999 1915516 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1222 00:26:59.523010 1915516 command_runner.go:130] > # blockio parameters.
	I1222 00:26:59.523015 1915516 command_runner.go:130] > # blockio_reload = false
	I1222 00:26:59.523027 1915516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1222 00:26:59.523033 1915516 command_runner.go:130] > # irqbalance daemon.
	I1222 00:26:59.523049 1915516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1222 00:26:59.523059 1915516 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1222 00:26:59.523067 1915516 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1222 00:26:59.523081 1915516 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1222 00:26:59.523091 1915516 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1222 00:26:59.523103 1915516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1222 00:26:59.523112 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523116 1915516 command_runner.go:130] > # rdt_config_file = ""
	I1222 00:26:59.523122 1915516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1222 00:26:59.523133 1915516 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1222 00:26:59.523140 1915516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1222 00:26:59.523148 1915516 command_runner.go:130] > # separate_pull_cgroup = ""
	I1222 00:26:59.523159 1915516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1222 00:26:59.523202 1915516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1222 00:26:59.523210 1915516 command_runner.go:130] > # will be added.
	I1222 00:26:59.523215 1915516 command_runner.go:130] > # default_capabilities = [
	I1222 00:26:59.523219 1915516 command_runner.go:130] > # 	"CHOWN",
	I1222 00:26:59.523223 1915516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1222 00:26:59.523380 1915516 command_runner.go:130] > # 	"FSETID",
	I1222 00:26:59.523391 1915516 command_runner.go:130] > # 	"FOWNER",
	I1222 00:26:59.523395 1915516 command_runner.go:130] > # 	"SETGID",
	I1222 00:26:59.523399 1915516 command_runner.go:130] > # 	"SETUID",
	I1222 00:26:59.523427 1915516 command_runner.go:130] > # 	"SETPCAP",
	I1222 00:26:59.523432 1915516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1222 00:26:59.523436 1915516 command_runner.go:130] > # 	"KILL",
	I1222 00:26:59.523439 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523448 1915516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1222 00:26:59.523464 1915516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1222 00:26:59.523470 1915516 command_runner.go:130] > # add_inheritable_capabilities = false
	I1222 00:26:59.523476 1915516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1222 00:26:59.523482 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523486 1915516 command_runner.go:130] > default_sysctls = [
	I1222 00:26:59.523491 1915516 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1222 00:26:59.523494 1915516 command_runner.go:130] > ]
	I1222 00:26:59.523499 1915516 command_runner.go:130] > # List of devices on the host that a
	I1222 00:26:59.523506 1915516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1222 00:26:59.523509 1915516 command_runner.go:130] > # allowed_devices = [
	I1222 00:26:59.523513 1915516 command_runner.go:130] > # 	"/dev/fuse",
	I1222 00:26:59.523517 1915516 command_runner.go:130] > # 	"/dev/net/tun",
	I1222 00:26:59.523520 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523525 1915516 command_runner.go:130] > # List of additional devices. specified as
	I1222 00:26:59.523533 1915516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1222 00:26:59.523538 1915516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1222 00:26:59.523544 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523548 1915516 command_runner.go:130] > # additional_devices = [
	I1222 00:26:59.523551 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523557 1915516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1222 00:26:59.523561 1915516 command_runner.go:130] > # cdi_spec_dirs = [
	I1222 00:26:59.523565 1915516 command_runner.go:130] > # 	"/etc/cdi",
	I1222 00:26:59.523569 1915516 command_runner.go:130] > # 	"/var/run/cdi",
	I1222 00:26:59.523574 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523581 1915516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1222 00:26:59.523588 1915516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1222 00:26:59.523591 1915516 command_runner.go:130] > # Defaults to false.
	I1222 00:26:59.523596 1915516 command_runner.go:130] > # device_ownership_from_security_context = false
	I1222 00:26:59.523603 1915516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1222 00:26:59.523609 1915516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1222 00:26:59.523612 1915516 command_runner.go:130] > # hooks_dir = [
	I1222 00:26:59.523617 1915516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1222 00:26:59.523622 1915516 command_runner.go:130] > # ]
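If custom OCI hooks were needed, the directory list could be extended in a drop-in; the second path below is a hypothetical example, not something configured in this run (missing directories are skipped, per the comment above):

	[crio.runtime]
	hooks_dir = [
		"/usr/share/containers/oci/hooks.d",
		"/etc/containers/oci/hooks.d",
	]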
	I1222 00:26:59.523628 1915516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1222 00:26:59.523635 1915516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1222 00:26:59.523642 1915516 command_runner.go:130] > # its default mounts from the following two files:
	I1222 00:26:59.523644 1915516 command_runner.go:130] > #
	I1222 00:26:59.523651 1915516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1222 00:26:59.523658 1915516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1222 00:26:59.523663 1915516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1222 00:26:59.523666 1915516 command_runner.go:130] > #
	I1222 00:26:59.523672 1915516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1222 00:26:59.523679 1915516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1222 00:26:59.523686 1915516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1222 00:26:59.523692 1915516 command_runner.go:130] > #      only add mounts it finds in this file.
	I1222 00:26:59.523695 1915516 command_runner.go:130] > #
	I1222 00:26:59.523700 1915516 command_runner.go:130] > # default_mounts_file = ""
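A sketch of pointing CRI-O at a dedicated mounts file; the path matches the override file described above, but the mount line is hypothetical. Note that once default_mounts_file is set, only mounts from that file are added:

	[crio.runtime]
	default_mounts_file = "/etc/containers/mounts.conf"
	# where /etc/containers/mounts.conf would hold one /SRC:/DST pair per line, e.g.:
	#   /usr/share/ca-certificates:/etc/ssl/certs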
	I1222 00:26:59.523706 1915516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1222 00:26:59.523713 1915516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1222 00:26:59.523716 1915516 command_runner.go:130] > # pids_limit = -1
	I1222 00:26:59.523723 1915516 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1222 00:26:59.523729 1915516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1222 00:26:59.523735 1915516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1222 00:26:59.523744 1915516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1222 00:26:59.523748 1915516 command_runner.go:130] > # log_size_max = -1
	I1222 00:26:59.523755 1915516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1222 00:26:59.523759 1915516 command_runner.go:130] > # log_to_journald = false
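Both limits above are deprecated in favor of the kubelet flags named in the comments; if they were still set directly, a sketch (values illustrative) would be:

	[crio.runtime]
	pids_limit = 2048        # deprecated; prefer kubelet --pod-pids-limit
	log_size_max = 16384     # must be >= 8192; prefer --container-log-max-size
	log_to_journald = true   # duplicate container output to journald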
	I1222 00:26:59.523765 1915516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1222 00:26:59.523770 1915516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1222 00:26:59.523775 1915516 command_runner.go:130] > # Path to directory for container attach sockets.
	I1222 00:26:59.523779 1915516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1222 00:26:59.523784 1915516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1222 00:26:59.523788 1915516 command_runner.go:130] > # bind_mount_prefix = ""
	I1222 00:26:59.523794 1915516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1222 00:26:59.523798 1915516 command_runner.go:130] > # read_only = false
	I1222 00:26:59.523804 1915516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1222 00:26:59.523810 1915516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1222 00:26:59.523815 1915516 command_runner.go:130] > # live configuration reload.
	I1222 00:26:59.523818 1915516 command_runner.go:130] > # log_level = "info"
	I1222 00:26:59.523825 1915516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1222 00:26:59.523830 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523834 1915516 command_runner.go:130] > # log_filter = ""
	I1222 00:26:59.523841 1915516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523847 1915516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1222 00:26:59.523850 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523858 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523862 1915516 command_runner.go:130] > # uid_mappings = ""
	I1222 00:26:59.523868 1915516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523874 1915516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1222 00:26:59.523879 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523887 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523891 1915516 command_runner.go:130] > # gid_mappings = ""
	I1222 00:26:59.523897 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1222 00:26:59.523903 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523909 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523917 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523921 1915516 command_runner.go:130] > # minimum_mappable_uid = -1
	I1222 00:26:59.523928 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1222 00:26:59.523934 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523940 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523948 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523952 1915516 command_runner.go:130] > # minimum_mappable_gid = -1
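The mapping strings use the containerID:HostID:Size form described above; the ranges below are illustrative only, and all four options are deprecated in favor of Kubernetes user namespace support (KEP-127):

	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	# Refuse to map host IDs below 100000 into user namespaces.
	minimum_mappable_uid = 100000
	minimum_mappable_gid = 100000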
	I1222 00:26:59.523958 1915516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1222 00:26:59.523964 1915516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1222 00:26:59.523970 1915516 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1222 00:26:59.523975 1915516 command_runner.go:130] > # ctr_stop_timeout = 30
	I1222 00:26:59.523981 1915516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1222 00:26:59.523987 1915516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1222 00:26:59.523992 1915516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1222 00:26:59.523997 1915516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1222 00:26:59.524001 1915516 command_runner.go:130] > # drop_infra_ctr = true
	I1222 00:26:59.524011 1915516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1222 00:26:59.524017 1915516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1222 00:26:59.524025 1915516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1222 00:26:59.524033 1915516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1222 00:26:59.524041 1915516 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1222 00:26:59.524048 1915516 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1222 00:26:59.524053 1915516 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1222 00:26:59.524059 1915516 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1222 00:26:59.524063 1915516 command_runner.go:130] > # shared_cpuset = ""
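A sketch of CPU pinning using the Linux CPU-list format referenced above (the CPU ranges are illustrative):

	[crio.runtime]
	# Pin infra (pause) containers; for best isolation, match the kubelet's reserved CPUs.
	infra_ctr_cpuset = "0-1"
	# CPUs that guaranteed containers may share in addition to their exclusive set.
	shared_cpuset = "2-3"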
	I1222 00:26:59.524068 1915516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1222 00:26:59.524074 1915516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1222 00:26:59.524078 1915516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1222 00:26:59.524086 1915516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1222 00:26:59.524090 1915516 command_runner.go:130] > # pinns_path = ""
	I1222 00:26:59.524095 1915516 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1222 00:26:59.524101 1915516 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1222 00:26:59.524106 1915516 command_runner.go:130] > # enable_criu_support = true
	I1222 00:26:59.524111 1915516 command_runner.go:130] > # Enable/disable the generation of container and
	I1222 00:26:59.524117 1915516 command_runner.go:130] > # sandbox lifecycle events sent to the Kubelet to optimize the PLEG
	I1222 00:26:59.524121 1915516 command_runner.go:130] > # enable_pod_events = false
	I1222 00:26:59.524128 1915516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1222 00:26:59.524133 1915516 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1222 00:26:59.524137 1915516 command_runner.go:130] > # default_runtime = "crun"
	I1222 00:26:59.524142 1915516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1222 00:26:59.524150 1915516 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1222 00:26:59.524159 1915516 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1222 00:26:59.524164 1915516 command_runner.go:130] > # creation as a file is not desired either.
	I1222 00:26:59.524173 1915516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1222 00:26:59.524178 1915516 command_runner.go:130] > # the hostname is managed dynamically.
	I1222 00:26:59.524182 1915516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1222 00:26:59.524185 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.524191 1915516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1222 00:26:59.524198 1915516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1222 00:26:59.524204 1915516 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1222 00:26:59.524209 1915516 command_runner.go:130] > # Each entry in the table should follow the format:
	I1222 00:26:59.524212 1915516 command_runner.go:130] > #
	I1222 00:26:59.524217 1915516 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1222 00:26:59.524221 1915516 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1222 00:26:59.524226 1915516 command_runner.go:130] > # runtime_type = "oci"
	I1222 00:26:59.524230 1915516 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1222 00:26:59.524236 1915516 command_runner.go:130] > # inherit_default_runtime = false
	I1222 00:26:59.524246 1915516 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1222 00:26:59.524250 1915516 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1222 00:26:59.524255 1915516 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1222 00:26:59.524259 1915516 command_runner.go:130] > # monitor_env = []
	I1222 00:26:59.524264 1915516 command_runner.go:130] > # privileged_without_host_devices = false
	I1222 00:26:59.524268 1915516 command_runner.go:130] > # allowed_annotations = []
	I1222 00:26:59.524274 1915516 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1222 00:26:59.524277 1915516 command_runner.go:130] > # no_sync_log = false
	I1222 00:26:59.524281 1915516 command_runner.go:130] > # default_annotations = {}
	I1222 00:26:59.524285 1915516 command_runner.go:130] > # stream_websockets = false
	I1222 00:26:59.524289 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.524337 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.524343 1915516 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1222 00:26:59.524349 1915516 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1222 00:26:59.524356 1915516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1222 00:26:59.524362 1915516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1222 00:26:59.524365 1915516 command_runner.go:130] > #   in $PATH.
	I1222 00:26:59.524371 1915516 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1222 00:26:59.524376 1915516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1222 00:26:59.524382 1915516 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of container
	I1222 00:26:59.524386 1915516 command_runner.go:130] > #   state.
	I1222 00:26:59.524392 1915516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1222 00:26:59.524399 1915516 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1222 00:26:59.524405 1915516 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1222 00:26:59.524411 1915516 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1222 00:26:59.524417 1915516 command_runner.go:130] > #   the values from the default runtime on load time.
	I1222 00:26:59.524423 1915516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1222 00:26:59.524429 1915516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1222 00:26:59.524435 1915516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1222 00:26:59.524441 1915516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1222 00:26:59.524446 1915516 command_runner.go:130] > #   The currently recognized values are:
	I1222 00:26:59.524453 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1222 00:26:59.524460 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1222 00:26:59.524466 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1222 00:26:59.524472 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1222 00:26:59.524479 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1222 00:26:59.524489 1915516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1222 00:26:59.524496 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1222 00:26:59.524502 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1222 00:26:59.524508 1915516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1222 00:26:59.524514 1915516 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1222 00:26:59.524521 1915516 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1222 00:26:59.524528 1915516 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1222 00:26:59.524534 1915516 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1222 00:26:59.524540 1915516 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1222 00:26:59.524546 1915516 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1222 00:26:59.524554 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1222 00:26:59.524560 1915516 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1222 00:26:59.524564 1915516 command_runner.go:130] > #   deprecated option "conmon".
	I1222 00:26:59.524571 1915516 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1222 00:26:59.524576 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1222 00:26:59.524583 1915516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1222 00:26:59.524587 1915516 command_runner.go:130] > #   should be moved to the container's cgroup
	I1222 00:26:59.524594 1915516 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1222 00:26:59.524599 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1222 00:26:59.524606 1915516 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1222 00:26:59.524610 1915516 command_runner.go:130] > #   conmon-rs by using:
	I1222 00:26:59.524617 1915516 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1222 00:26:59.524625 1915516 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1222 00:26:59.524633 1915516 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1222 00:26:59.524640 1915516 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1222 00:26:59.524645 1915516 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1222 00:26:59.524652 1915516 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1222 00:26:59.524660 1915516 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1222 00:26:59.524665 1915516 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1222 00:26:59.524673 1915516 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1222 00:26:59.524683 1915516 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1222 00:26:59.524688 1915516 command_runner.go:130] > #   when a machine crash happens.
	I1222 00:26:59.524695 1915516 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1222 00:26:59.524703 1915516 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1222 00:26:59.524711 1915516 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1222 00:26:59.524715 1915516 command_runner.go:130] > #   seccomp profile for the runtime.
	I1222 00:26:59.524721 1915516 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1222 00:26:59.524729 1915516 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1222 00:26:59.524732 1915516 command_runner.go:130] > #
	I1222 00:26:59.524737 1915516 command_runner.go:130] > # Using the seccomp notifier feature:
	I1222 00:26:59.524739 1915516 command_runner.go:130] > #
	I1222 00:26:59.524745 1915516 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1222 00:26:59.524752 1915516 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1222 00:26:59.524755 1915516 command_runner.go:130] > #
	I1222 00:26:59.524761 1915516 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1222 00:26:59.524768 1915516 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1222 00:26:59.524770 1915516 command_runner.go:130] > #
	I1222 00:26:59.524776 1915516 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1222 00:26:59.524780 1915516 command_runner.go:130] > # feature.
	I1222 00:26:59.524783 1915516 command_runner.go:130] > #
	I1222 00:26:59.524789 1915516 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1222 00:26:59.524795 1915516 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1222 00:26:59.524801 1915516 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1222 00:26:59.524807 1915516 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1222 00:26:59.524813 1915516 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1222 00:26:59.524815 1915516 command_runner.go:130] > #
	I1222 00:26:59.524834 1915516 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1222 00:26:59.524841 1915516 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1222 00:26:59.524844 1915516 command_runner.go:130] > #
	I1222 00:26:59.524855 1915516 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1222 00:26:59.524861 1915516 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1222 00:26:59.524865 1915516 command_runner.go:130] > #
	I1222 00:26:59.524871 1915516 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1222 00:26:59.524877 1915516 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1222 00:26:59.524880 1915516 command_runner.go:130] > # limitation.
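Putting the runtime-handler fields together, a hypothetical VM-type handler might be declared as below. The kata binary and config paths are assumptions for illustration, not part of this cluster's configuration; per the notes above, runtime_config_path is only valid for the "vm" runtime_type:

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"                # hypothetical location
	runtime_type = "vm"
	runtime_config_path = "/etc/kata/configuration.toml"  # hypothetical; "vm" only
	privileged_without_host_devices = true
	monitor_path = "/usr/libexec/crio/conmon"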
	I1222 00:26:59.524884 1915516 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1222 00:26:59.524890 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1222 00:26:59.524894 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524900 1915516 command_runner.go:130] > runtime_root = "/run/crun"
	I1222 00:26:59.524904 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524908 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524912 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524917 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524921 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524924 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524931 1915516 command_runner.go:130] > allowed_annotations = [
	I1222 00:26:59.524936 1915516 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1222 00:26:59.524939 1915516 command_runner.go:130] > ]
	I1222 00:26:59.524944 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524948 1915516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1222 00:26:59.524952 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1222 00:26:59.524955 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524959 1915516 command_runner.go:130] > runtime_root = "/run/runc"
	I1222 00:26:59.524964 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524967 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524972 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524976 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524980 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524984 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524988 1915516 command_runner.go:130] > privileged_without_host_devices = false
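To actually use the seccomp notifier described earlier, the chosen handler would need that annotation whitelisted, e.g. via a drop-in like the sketch below; note that in this run only "io.containers.trace-syscall" is allowed, and only on crun:

	[crio.runtime.runtimes.runc]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]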
	I1222 00:26:59.524995 1915516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1222 00:26:59.525000 1915516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1222 00:26:59.525007 1915516 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1222 00:26:59.525015 1915516 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix and a set of resources it supports mutating.
	I1222 00:26:59.525027 1915516 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1222 00:26:59.525037 1915516 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1222 00:26:59.525044 1915516 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1222 00:26:59.525050 1915516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1222 00:26:59.525059 1915516 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1222 00:26:59.525068 1915516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1222 00:26:59.525074 1915516 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1222 00:26:59.525081 1915516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1222 00:26:59.525085 1915516 command_runner.go:130] > # Example:
	I1222 00:26:59.525089 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1222 00:26:59.525095 1915516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1222 00:26:59.525100 1915516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1222 00:26:59.525108 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1222 00:26:59.525111 1915516 command_runner.go:130] > # cpuset = "0-1"
	I1222 00:26:59.525115 1915516 command_runner.go:130] > # cpushares = "5"
	I1222 00:26:59.525119 1915516 command_runner.go:130] > # cpuquota = "1000"
	I1222 00:26:59.525122 1915516 command_runner.go:130] > # cpuperiod = "100000"
	I1222 00:26:59.525126 1915516 command_runner.go:130] > # cpulimit = "35"
	I1222 00:26:59.525129 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.525133 1915516 command_runner.go:130] > # The workload name is workload-type.
	I1222 00:26:59.525141 1915516 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1222 00:26:59.525147 1915516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1222 00:26:59.525153 1915516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1222 00:26:59.525161 1915516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1222 00:26:59.525166 1915516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1222 00:26:59.525171 1915516 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1222 00:26:59.525178 1915516 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1222 00:26:59.525182 1915516 command_runner.go:130] > # Default value is set to true
	I1222 00:26:59.525187 1915516 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1222 00:26:59.525192 1915516 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1222 00:26:59.525197 1915516 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1222 00:26:59.525201 1915516 command_runner.go:130] > # Default value is set to 'false'
	I1222 00:26:59.525205 1915516 command_runner.go:130] > # disable_hostport_mapping = false
	I1222 00:26:59.525211 1915516 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1222 00:26:59.525219 1915516 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1222 00:26:59.525222 1915516 command_runner.go:130] > # timezone = ""
	I1222 00:26:59.525229 1915516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1222 00:26:59.525231 1915516 command_runner.go:130] > #
	I1222 00:26:59.525238 1915516 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1222 00:26:59.525244 1915516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1222 00:26:59.525248 1915516 command_runner.go:130] > [crio.image]
	I1222 00:26:59.525254 1915516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1222 00:26:59.525258 1915516 command_runner.go:130] > # default_transport = "docker://"
	I1222 00:26:59.525363 1915516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1222 00:26:59.525377 1915516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525381 1915516 command_runner.go:130] > # global_auth_file = ""
	I1222 00:26:59.525392 1915516 command_runner.go:130] > # The image used to instantiate infra containers.
	I1222 00:26:59.525398 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525403 1915516 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.525414 1915516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1222 00:26:59.525425 1915516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525432 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525436 1915516 command_runner.go:130] > # pause_image_auth_file = ""
	I1222 00:26:59.525442 1915516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1222 00:26:59.525450 1915516 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1222 00:26:59.525460 1915516 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1222 00:26:59.525466 1915516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1222 00:26:59.525474 1915516 command_runner.go:130] > # pause_command = "/pause"
	I1222 00:26:59.525480 1915516 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1222 00:26:59.525491 1915516 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1222 00:26:59.525499 1915516 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1222 00:26:59.525509 1915516 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1222 00:26:59.525515 1915516 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1222 00:26:59.525521 1915516 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1222 00:26:59.525527 1915516 command_runner.go:130] > # pinned_images = [
	I1222 00:26:59.525530 1915516 command_runner.go:130] > # ]
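A sketch of image pinning using the glob form described above (a trailing * wildcard); the pause_image value mirrors the commented default:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	# Exempt all pause image tags from the kubelet's garbage collection.
	pinned_images = ["registry.k8s.io/pause*"]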
	I1222 00:26:59.525536 1915516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1222 00:26:59.525545 1915516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1222 00:26:59.525557 1915516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1222 00:26:59.525563 1915516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1222 00:26:59.525572 1915516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1222 00:26:59.525578 1915516 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1222 00:26:59.525584 1915516 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1222 00:26:59.525594 1915516 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1222 00:26:59.525601 1915516 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1222 00:26:59.525608 1915516 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1222 00:26:59.525616 1915516 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1222 00:26:59.525621 1915516 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1222 00:26:59.525627 1915516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1222 00:26:59.525634 1915516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1222 00:26:59.525642 1915516 command_runner.go:130] > # changing them here.
	I1222 00:26:59.525648 1915516 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1222 00:26:59.525652 1915516 command_runner.go:130] > # insecure_registries = [
	I1222 00:26:59.525659 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525666 1915516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1222 00:26:59.525676 1915516 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1222 00:26:59.525684 1915516 command_runner.go:130] > # image_volumes = "mkdir"
	I1222 00:26:59.525692 1915516 command_runner.go:130] > # Temporary directory to use for storing big files
	I1222 00:26:59.525697 1915516 command_runner.go:130] > # big_files_temporary_dir = ""
	I1222 00:26:59.525703 1915516 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1222 00:26:59.525711 1915516 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1222 00:26:59.525717 1915516 command_runner.go:130] > # auto_reload_registries = false
	I1222 00:26:59.525724 1915516 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1222 00:26:59.525734 1915516 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1222 00:26:59.525744 1915516 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1222 00:26:59.525748 1915516 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1222 00:26:59.525770 1915516 command_runner.go:130] > # The mode of short name resolution.
	I1222 00:26:59.525780 1915516 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1222 00:26:59.525788 1915516 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1222 00:26:59.525794 1915516 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1222 00:26:59.525800 1915516 command_runner.go:130] > # short_name_mode = "enforcing"
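A combined sketch of the pull-behavior knobs above; the timeout value is illustrative, and per the comment above it implies a progress report every 6s (timeout / 10):

	[crio.image]
	auto_reload_registries = true
	pull_progress_timeout = "60s"   # cancel pulls that make no progress for 60s
	short_name_mode = "enforcing"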
	I1222 00:26:59.525807 1915516 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1222 00:26:59.525813 1915516 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1222 00:26:59.525821 1915516 command_runner.go:130] > # oci_artifact_mount_support = true
	I1222 00:26:59.525829 1915516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1222 00:26:59.525836 1915516 command_runner.go:130] > # CNI plugins.
	I1222 00:26:59.525843 1915516 command_runner.go:130] > [crio.network]
	I1222 00:26:59.525850 1915516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1222 00:26:59.525859 1915516 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1222 00:26:59.525863 1915516 command_runner.go:130] > # cni_default_network = ""
	I1222 00:26:59.525869 1915516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1222 00:26:59.525877 1915516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1222 00:26:59.525883 1915516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1222 00:26:59.525887 1915516 command_runner.go:130] > # plugin_dirs = [
	I1222 00:26:59.525891 1915516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1222 00:26:59.525897 1915516 command_runner.go:130] > # ]
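The network defaults above could be made explicit in a drop-in; the network name below is hypothetical (an empty string picks the first config found in network_dir):

	[crio.network]
	cni_default_network = "kindnet"   # hypothetical name, not from this run
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = ["/opt/cni/bin/"]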
	I1222 00:26:59.525903 1915516 command_runner.go:130] > # List of included pod metrics.
	I1222 00:26:59.525908 1915516 command_runner.go:130] > # included_pod_metrics = [
	I1222 00:26:59.525913 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525920 1915516 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1222 00:26:59.525926 1915516 command_runner.go:130] > [crio.metrics]
	I1222 00:26:59.525932 1915516 command_runner.go:130] > # Globally enable or disable metrics support.
	I1222 00:26:59.525936 1915516 command_runner.go:130] > # enable_metrics = false
	I1222 00:26:59.525945 1915516 command_runner.go:130] > # Specify enabled metrics collectors.
	I1222 00:26:59.525950 1915516 command_runner.go:130] > # Per default all metrics are enabled.
	I1222 00:26:59.525957 1915516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1222 00:26:59.525969 1915516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1222 00:26:59.525976 1915516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1222 00:26:59.525981 1915516 command_runner.go:130] > # metrics_collectors = [
	I1222 00:26:59.525987 1915516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1222 00:26:59.525992 1915516 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1222 00:26:59.525998 1915516 command_runner.go:130] > # 	"containers_oom_total",
	I1222 00:26:59.526006 1915516 command_runner.go:130] > # 	"processes_defunct",
	I1222 00:26:59.526010 1915516 command_runner.go:130] > # 	"operations_total",
	I1222 00:26:59.526014 1915516 command_runner.go:130] > # 	"operations_latency_seconds",
	I1222 00:26:59.526019 1915516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1222 00:26:59.526023 1915516 command_runner.go:130] > # 	"operations_errors_total",
	I1222 00:26:59.526028 1915516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1222 00:26:59.526033 1915516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1222 00:26:59.526037 1915516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1222 00:26:59.526041 1915516 command_runner.go:130] > # 	"image_pulls_success_total",
	I1222 00:26:59.526045 1915516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1222 00:26:59.526050 1915516 command_runner.go:130] > # 	"containers_oom_count_total",
	I1222 00:26:59.526055 1915516 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1222 00:26:59.526059 1915516 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1222 00:26:59.526064 1915516 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1222 00:26:59.526066 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526080 1915516 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1222 00:26:59.526084 1915516 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1222 00:26:59.526089 1915516 command_runner.go:130] > # The port on which the metrics server will listen.
	I1222 00:26:59.526093 1915516 command_runner.go:130] > # metrics_port = 9090
	I1222 00:26:59.526098 1915516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1222 00:26:59.526102 1915516 command_runner.go:130] > # metrics_socket = ""
	I1222 00:26:59.526107 1915516 command_runner.go:130] > # The certificate for the secure metrics server.
	I1222 00:26:59.526118 1915516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1222 00:26:59.526125 1915516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1222 00:26:59.526130 1915516 command_runner.go:130] > # certificate on any modification event.
	I1222 00:26:59.526134 1915516 command_runner.go:130] > # metrics_cert = ""
	I1222 00:26:59.526139 1915516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1222 00:26:59.526148 1915516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1222 00:26:59.526151 1915516 command_runner.go:130] > # metrics_key = ""
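A sketch enabling the metrics endpoint with a subset of the collectors listed above; host and port mirror the commented defaults:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]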
	I1222 00:26:59.526157 1915516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1222 00:26:59.526163 1915516 command_runner.go:130] > [crio.tracing]
	I1222 00:26:59.526169 1915516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1222 00:26:59.526177 1915516 command_runner.go:130] > # enable_tracing = false
	I1222 00:26:59.526182 1915516 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1222 00:26:59.526187 1915516 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1222 00:26:59.526197 1915516 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1222 00:26:59.526205 1915516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
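A tracing sketch; the endpoint mirrors the commented default, and per the comment above a rate of 1000000 samples every span:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000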
	I1222 00:26:59.526213 1915516 command_runner.go:130] > # CRI-O NRI configuration.
	I1222 00:26:59.526218 1915516 command_runner.go:130] > [crio.nri]
	I1222 00:26:59.526222 1915516 command_runner.go:130] > # Globally enable or disable NRI.
	I1222 00:26:59.526226 1915516 command_runner.go:130] > # enable_nri = true
	I1222 00:26:59.526230 1915516 command_runner.go:130] > # NRI socket to listen on.
	I1222 00:26:59.526238 1915516 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1222 00:26:59.526242 1915516 command_runner.go:130] > # NRI plugin directory to use.
	I1222 00:26:59.526247 1915516 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1222 00:26:59.526255 1915516 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1222 00:26:59.526264 1915516 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1222 00:26:59.526270 1915516 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1222 00:26:59.526331 1915516 command_runner.go:130] > # nri_disable_connections = false
	I1222 00:26:59.526342 1915516 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1222 00:26:59.526347 1915516 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1222 00:26:59.526352 1915516 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1222 00:26:59.526359 1915516 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1222 00:26:59.526364 1915516 command_runner.go:130] > # NRI default validator configuration.
	I1222 00:26:59.526374 1915516 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1222 00:26:59.526381 1915516 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1222 00:26:59.526389 1915516 command_runner.go:130] > # can be restricted/rejected:
	I1222 00:26:59.526393 1915516 command_runner.go:130] > # - OCI hook injection
	I1222 00:26:59.526399 1915516 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1222 00:26:59.526406 1915516 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1222 00:26:59.526411 1915516 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1222 00:26:59.526416 1915516 command_runner.go:130] > # - adjustment of linux namespaces
	I1222 00:26:59.526425 1915516 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1222 00:26:59.526435 1915516 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1222 00:26:59.526442 1915516 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1222 00:26:59.526448 1915516 command_runner.go:130] > #
	I1222 00:26:59.526452 1915516 command_runner.go:130] > # [crio.nri.default_validator]
	I1222 00:26:59.526457 1915516 command_runner.go:130] > # nri_enable_default_validator = false
	I1222 00:26:59.526467 1915516 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1222 00:26:59.526474 1915516 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1222 00:26:59.526485 1915516 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1222 00:26:59.526492 1915516 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1222 00:26:59.526500 1915516 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1222 00:26:59.526505 1915516 command_runner.go:130] > # nri_validator_required_plugins = [
	I1222 00:26:59.526510 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526516 1915516 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
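A sketch turning on NRI with the default validator rejecting OCI-hook injection; the required plugin name is hypothetical:

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = ["my-policy-plugin"]   # hypothetical plugin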
	I1222 00:26:59.526525 1915516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1222 00:26:59.526530 1915516 command_runner.go:130] > [crio.stats]
	I1222 00:26:59.526543 1915516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1222 00:26:59.526549 1915516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1222 00:26:59.526556 1915516 command_runner.go:130] > # stats_collection_period = 0
	I1222 00:26:59.526563 1915516 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1222 00:26:59.526573 1915516 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1222 00:26:59.526577 1915516 command_runner.go:130] > # collection_period = 0
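Both periods above are in seconds, with 0 meaning on-demand collection; a periodic-collection sketch:

	[crio.stats]
	stats_collection_period = 10   # collect pod/container stats every 10s
	collection_period = 10         # likewise for pod sandbox metrics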
	I1222 00:26:59.526772 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.4923315Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1222 00:26:59.526793 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492368561Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1222 00:26:59.526804 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492394456Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1222 00:26:59.526811 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492418381Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1222 00:26:59.526829 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.49249673Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:59.526841 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492972301Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1222 00:26:59.526852 1915516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1222 00:26:59.526929 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:59.526945 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:59.526969 1915516 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:26:59.526995 1915516 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:26:59.527121 1915516 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:26:59.527200 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:26:59.535433 1915516 command_runner.go:130] > kubeadm
	I1222 00:26:59.535512 1915516 command_runner.go:130] > kubectl
	I1222 00:26:59.535531 1915516 command_runner.go:130] > kubelet
	I1222 00:26:59.535576 1915516 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:26:59.535650 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:26:59.543039 1915516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:26:59.555537 1915516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:26:59.568459 1915516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 00:26:59.580757 1915516 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:26:59.584332 1915516 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
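The grep above is the usual idempotent-append check for /etc/hosts: probe for the exact entry first, write only on a miss. The same pattern as a standalone sketch (IP and hostname copied from the log):

  # append the control-plane alias only if it is not already present
  grep -q 'control-plane.minikube.internal' /etc/hosts \
    || printf '%s\t%s\n' 192.168.49.2 control-plane.minikube.internal | sudo tee -a /etc/hosts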
	I1222 00:26:59.584456 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.724165 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:00.346235 1915516 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:27:00.346307 1915516 certs.go:195] generating shared ca certs ...
	I1222 00:27:00.346339 1915516 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:00.346533 1915516 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:27:00.346648 1915516 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:27:00.346710 1915516 certs.go:257] generating profile certs ...
	I1222 00:27:00.346876 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:27:00.347136 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:27:00.347262 1915516 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:27:00.347310 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:27:00.347347 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:27:00.347391 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:27:00.347455 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:27:00.347514 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:27:00.347563 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:27:00.347599 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:27:00.347655 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:27:00.347758 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:27:00.347824 1915516 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:27:00.347874 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:27:00.347933 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:27:00.347992 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:27:00.348063 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:27:00.348147 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:27:00.348210 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem -> /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.348254 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.348294 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.349116 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:27:00.373997 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:27:00.395587 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:27:00.415350 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:27:00.433576 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:27:00.451786 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:27:00.469999 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:27:00.488347 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:27:00.506732 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:27:00.524546 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:27:00.542404 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:27:00.559977 1915516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:27:00.572924 1915516 ssh_runner.go:195] Run: openssl version
	I1222 00:27:00.579133 1915516 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:27:00.579568 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.587905 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:27:00.596398 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600098 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600361 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600432 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.641247 1915516 command_runner.go:130] > 51391683
	I1222 00:27:00.641737 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:27:00.649412 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.657021 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:27:00.664905 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668819 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668897 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668950 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.710718 1915516 command_runner.go:130] > 3ec20f2e
	I1222 00:27:00.711212 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:27:00.718755 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.725802 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:27:00.733002 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736446 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736472 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736534 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.776933 1915516 command_runner.go:130] > b5213941
	I1222 00:27:00.777458 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
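The hash-then-symlink sequence above follows OpenSSL's subject-hash lookup convention: "openssl x509 -hash" prints a short hash of the certificate's subject name, and trust lookups resolve <hash>.0 links under /etc/ssl/certs. A sketch of the same steps (the hash value differs per certificate):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  sudo test -L "/etc/ssl/certs/${h}.0" && echo linked
  # "openssl rehash /etc/ssl/certs" performs the same for a whole directory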
	I1222 00:27:00.784943 1915516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788657 1915516 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788682 1915516 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:27:00.788688 1915516 command_runner.go:130] > Device: 259,1	Inode: 1585688     Links: 1
	I1222 00:27:00.788695 1915516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:27:00.788729 1915516 command_runner.go:130] > Access: 2025-12-22 00:22:53.052243967 +0000
	I1222 00:27:00.788738 1915516 command_runner.go:130] > Modify: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788744 1915516 command_runner.go:130] > Change: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788752 1915516 command_runner.go:130] >  Birth: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788843 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:27:00.834300 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.834770 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:27:00.875032 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.875494 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:27:00.916072 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.916544 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:27:00.957168 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.957608 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:27:01.005250 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:01.005855 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:27:01.046783 1915516 command_runner.go:130] > Certificate will not expire
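Each check above uses -checkend 86400, which asks whether the certificate expires within the next 86400 seconds (24h); the answer is carried in the exit status, which is why the log only captures the "Certificate will not expire" message. A standalone sketch of the same check:

  if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
    echo 'valid for at least 24h'        # exit status 0
  else
    echo 'expires within 24h (or already expired)' >&2   # exit status 1
  fi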
	I1222 00:27:01.047595 1915516 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:27:01.047694 1915516 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:27:01.047794 1915516 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:27:01.074422 1915516 cri.go:96] found id: ""
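The empty found id: "" above means CRI-O currently holds no kube-system containers, running or exited. The same label filter is usable interactively; a sketch, assuming crictl is pointed at the CRI-O socket shown in the config earlier:

  # list all (including exited) kube-system container IDs, if any
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
    ps -a --quiet --label io.kubernetes.pod.namespace=kube-system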
	I1222 00:27:01.074566 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:27:01.081706 1915516 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:27:01.081729 1915516 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:27:01.081736 1915516 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:27:01.082711 1915516 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:27:01.082752 1915516 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:27:01.082809 1915516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:27:01.090505 1915516 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:27:01.090899 1915516 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091003 1915516 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "functional-837733" cluster setting kubeconfig missing "functional-837733" context setting]
	I1222 00:27:01.091279 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.091694 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091853 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.092420 1915516 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:27:01.092441 1915516 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:27:01.092469 1915516 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:27:01.092508 1915516 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:27:01.092519 1915516 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:27:01.092524 1915516 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:27:01.092821 1915516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:27:01.103318 1915516 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:27:01.103353 1915516 kubeadm.go:602] duration metric: took 20.58853ms to restartPrimaryControlPlane
	I1222 00:27:01.103366 1915516 kubeadm.go:403] duration metric: took 55.780111ms to StartCluster
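The "does not require reconfiguration" decision above comes from the diff two entries earlier: diff -u exits 0 when the on-disk kubeadm.yaml already matches the freshly generated kubeadm.yaml.new, so the running control plane can be reused. The same check as a sketch:

  if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
    echo 'config unchanged; reuse the running control plane'
  else
    echo 'config drifted; control plane must be reconfigured'
  fi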
	I1222 00:27:01.103381 1915516 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.103449 1915516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.104102 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.104313 1915516 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:27:01.104706 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:27:01.104752 1915516 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:27:01.104888 1915516 addons.go:70] Setting storage-provisioner=true in profile "functional-837733"
	I1222 00:27:01.104908 1915516 addons.go:239] Setting addon storage-provisioner=true in "functional-837733"
	I1222 00:27:01.104931 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.105709 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.105852 1915516 addons.go:70] Setting default-storageclass=true in profile "functional-837733"
	I1222 00:27:01.105876 1915516 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-837733"
	I1222 00:27:01.106124 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.111206 1915516 out.go:179] * Verifying Kubernetes components...
	I1222 00:27:01.114015 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:27:01.149988 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.150406 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.150740 1915516 addons.go:239] Setting addon default-storageclass=true in "functional-837733"
	I1222 00:27:01.150781 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.151633 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.160017 1915516 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:27:01.163813 1915516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.163840 1915516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:27:01.163933 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.208933 1915516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:01.208958 1915516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:27:01.209032 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.220299 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.259895 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.324483 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:01.352138 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.392769 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.095697 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.095745 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095787 1915516 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095683 1915516 node_ready.go:35] waiting up to 6m0s for node "functional-837733" to be "Ready" ...
	I1222 00:27:02.095893 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.095958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.096162 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.096189 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.096261 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.229628 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.250171 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.299403 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.303146 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.325073 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.325129 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
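Every apply in this stretch fails the same way: kubectl's client-side validation tries to download the OpenAPI document from the apiserver, which is not yet listening on port 8441, so the command exits 1 before anything reaches the cluster, and the addon manager retries with backoff. The equivalent manual loop, as a sketch using the same paths as the log:

  # retry until the apiserver answers; --force mirrors the later attempts in the log
  until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml; do
    sleep 1
  done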
	I1222 00:27:02.596516 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.596602 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.596960 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.700363 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.757664 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.757704 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.778883 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.845712 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.845792 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.096069 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.096495 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.596210 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.596328 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.596586 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.599774 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:03.608220 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:03.704291 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.704413 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.724087 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.724140 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.096645 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.096746 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.097060 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.097110 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
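The GET /api/v1/nodes/functional-837733 polling above is the node-readiness wait: one request roughly every 500ms, with connection-refused tolerated while the apiserver comes back. The same condition can be awaited with stock kubectl; a sketch using the kubeconfig path from the log:

  kubectl --kubeconfig /home/jenkins/minikube-integration/22179-1874095/kubeconfig \
    wait --for=condition=Ready node/functional-837733 --timeout=6m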
	I1222 00:27:04.395547 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:04.461482 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.461526 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.596939 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.597016 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.597354 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.908718 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:04.967177 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.967217 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:05.096456 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.096540 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.096895 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.596626 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.596711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.096915 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.096990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:06.097365 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:06.122623 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:06.180609 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.181800 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 00:27:06.185073 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.244148 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:06.244195 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.596862 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.596990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.597266 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.095976 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.096054 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.096380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.596061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.596391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.650649 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:07.682081 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:07.723708 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.723755 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:07.756657 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.756702 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:08.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.096407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.596459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:08.596512 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:09.096039 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.500982 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:09.557414 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:09.557460 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:09.596618 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.596687 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.596949 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.096786 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.096870 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.407596 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:10.466808 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:10.466847 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:10.596231 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.596316 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:10.596666 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:11.096583 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.096658 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.596951 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.597033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.597361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.095930 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.096006 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.096359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.596103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.596346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.096064 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.096469 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:13.110706 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:13.162639 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:13.165994 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
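Each of these apply failures dies in kubectl's client-side validation step: before sending the manifest, kubectl downloads the OpenAPI schema from the apiserver (here https://localhost:8441/openapi/v2), and that fetch hits the same refused socket. The error text suggests --validate=false, which skips the schema fetch; a minimal sketch of that invocation, reusing the paths from the log (illustrative, not what minikube actually runs):

```go
// Package addons sketches the failing kubectl invocation above with
// client-side validation disabled, so no /openapi/v2 fetch is needed.
package addons

import (
	"fmt"
	"os"
	"os/exec"
)

// ApplyNoValidate runs the pinned kubectl binary against a manifest
// with --validate=false. Binary and kubeconfig paths are the ones
// from the log above.
func ApplyNoValidate(manifest string) error {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "--validate=false", "-f", manifest,
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}
```

Note that in this run disabling validation would only move the failure: the write itself goes to the same unreachable apiserver and would be refused too, which is why minikube keeps validation on and simply retries.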
	I1222 00:27:13.596604 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.597025 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.096902 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.097221 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.596331 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.007048 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:15.075847 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:15.075897 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:15.095981 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.096063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.596111 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.596199 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.596504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.096151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.096481 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.596451 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.596873 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.096633 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.096711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.097034 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.256498 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:17.320974 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:17.324801 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:17.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.596544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:17.596599 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.096285 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.096359 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.096666 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.596351 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.596417 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.596709 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.596186 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.596288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.596608 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:19.596658 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:20.095993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.596149 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.096326 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.096409 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.096724 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.596667 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.596736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.597043 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:21.597093 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:22.096478 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.096951 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.596629 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.596721 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.597082 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.096795 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.096876 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.097130 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.596917 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.596995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.597294 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.941866 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:23.998448 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:23.998497 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:23.998521 1915516 retry.go:84] will retry after 7.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
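Here the addon manager's retry.go schedules an explicit backoff for the first time; later in this excerpt the waits grow to 9.7s, 15s, 28.2s and 39.2s, i.e. roughly 1.3-1.9x per attempt, which looks like exponential backoff with jitter. A minimal sketch of that pattern under those assumptions (hypothetical helper, not minikube's retry.go):

```go
// Package retryutil sketches retry-with-growing-backoff in the shape
// of the 7.2s / 9.7s / 15s / 28.2s / 39.2s waits in this log.
package retryutil

import (
	"fmt"
	"math/rand"
	"time"
)

// Do re-runs fn until it succeeds or attempts are exhausted, sleeping
// ~1.5x longer (plus jitter) after each failure.
func Do(fn func() error, initial time.Duration, attempts int) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/2))
		fmt.Printf("will retry after %v: %v\n", sleep.Round(100*time.Millisecond), err)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow ~1.5x per attempt
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}
```

Meanwhile every node poll in between keeps failing the same way (empty Response status), so the apiserver on 192.168.49.2:8441 stays unreachable throughout this window.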
	I1222 00:27:24.096775 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.096858 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.097179 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.596921 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.596997 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.597304 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.096528 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.596241 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.596313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.596669 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.096357 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.096423 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.096702 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:26.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.596861 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.597278 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.596062 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.596394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.096121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.096413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.596136 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.596210 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.596520 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:28.596577 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:29.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.096085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.096330 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.596121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.096205 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.096297 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.096674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.321067 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:30.379433 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:30.379480 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.379505 1915516 retry.go:84] will retry after 9.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.596804 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.596878 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.597200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.597254 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:31.096701 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.096780 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.097109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.231597 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:31.293911 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:31.293951 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:31.596615 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.096660 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:32.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.097042 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.596886 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:32.596967 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.597283 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.597352 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:33.096960 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:33.097032 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.097391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.596091 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:33.596164 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:34.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.596230 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:34.596325 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.596664 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.096343 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:35.096412 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.096685 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:35.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.596025 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:35.596096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.096033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:36.096117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.596412 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:36.596479 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.596727 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.096555 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:37.096628 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.096973 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:37.097038 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.596788 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:37.596867 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.597219 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.095946 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:38.096029 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.096335 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:38.596129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.596466 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.096201 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:39.096281 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.096646 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:39.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.596380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.596430 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:40.096081 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.096154 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.098739 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:40.155643 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:40.159357 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.159395 1915516 retry.go:84] will retry after 15s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.096329 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:41.096400 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.096688 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.596638 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:41.596757 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.597079 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.597130 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:42.096986 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:42.097069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.101091 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1222 00:27:42.596907 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:42.596992 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.597277 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:43.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.096458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:43.596165 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.096105 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:44.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.096430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:44.096472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:44.596127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.096194 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:45.096288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.596011 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:45.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.596379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.096150 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:46.096534 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.458265 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:46.518675 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:46.518729 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.518755 1915516 retry.go:84] will retry after 28.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.596931 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.597020 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.597346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:47.096075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.096364 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.596079 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:47.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.096201 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:48.096276 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.096625 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.096740 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.595982 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:48.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.596414 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:49.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.096420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.596056 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:49.596137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.596454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.096214 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:50.096306 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.596061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:50.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.596464 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.596521 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:51.096372 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:51.096448 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.096776 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.596622 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:51.596716 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.096857 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:52.096927 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.097248 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:52.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:53.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.096378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:53.096448 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.596094 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:53.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.596524 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.096119 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:54.096195 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.096565 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.596116 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:54.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.096066 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:55.096178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:55.096580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.200777 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:55.265479 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:55.265520 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:55.265546 1915516 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[ 39 near-identical poll cycles elided, I1222 00:27:55.595965 to 00:28:14.596458: GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 every ~500ms with the same Accept (application/vnd.kubernetes.protobuf,application/json) and minikube User-Agent headers; every response was empty (status="" headers="" milliseconds=0), and node_ready.go:55 logged the will-retry warning "dial tcp 192.168.49.2:8441: connect: connection refused" 8 times, roughly every 2s. ]
	I1222 00:28:14.678913 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:14.737274 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:14.740868 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:28:14.740903 1915516 retry.go:84] will retry after 44.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
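	Both apply failures reduce to the same root cause: nothing is listening on port 8441 yet, so even kubectl's OpenAPI validation download fails with connection refused. A minimal readiness gate, assuming only that the apiserver will eventually accept TCP connections on 192.168.49.2:8441, could look like the sketch below; the waitForAPIServer name is hypothetical.

	// apiserver_wait_sketch.go: illustrative gate before re-running kubectl apply.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForAPIServer (hypothetical) polls until a TCP connection to the
	// apiserver endpoint succeeds or the deadline passes. This is exactly the
	// condition the log's "connect: connection refused" lines keep failing on.
	func waitForAPIServer(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the log's polls
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForAPIServer("192.168.49.2:8441", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver reachable; safe to run kubectl apply")
	}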
	[ 39 near-identical poll cycles elided, I1222 00:28:15.096479 to 00:28:34.096436: GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 every ~500ms with the same protobuf/JSON Accept and minikube User-Agent headers; every response was empty (status="" headers="" milliseconds=0), and node_ready.go:55 logged the will-retry "connection refused" warning 9 times, roughly every 2s. ]
	I1222 00:28:34.454097 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:28:34.514142 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514187 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514269 1915516 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
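	The node_ready.go:55 warnings come from a loop that GETs /api/v1/nodes/functional-837733 and inspects the node's Ready condition. A self-contained sketch of that check, assuming an HTTPS endpoint with a self-signed certificate (hence InsecureSkipVerify, for illustration only) and no client credentials, neither of which reflects minikube's real client setup, could be:

	// node_ready_sketch.go: illustrative Ready-condition poll; real clusters
	// need proper TLS and auth, which this sketch deliberately omits.
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// nodeIsReady (hypothetical) reports whether the named node carries the
	// Ready condition with status "True".
	func nodeIsReady(base, name string) (bool, error) {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get(base + "/api/v1/nodes/" + name)
		if err != nil {
			return false, err // e.g. "connect: connection refused", as in the log
		}
		defer resp.Body.Close()
		var n nodeStatus
		if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil
	}

	func main() {
		ready, err := nodeIsReady("https://192.168.49.2:8441", "functional-837733")
		fmt.Println(ready, err)
	}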
	I1222 00:28:34.596441 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:34.596529 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:34.596855 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:35.096655 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:35.096737 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:35.097056 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:35.596886 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:35.596959 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:35.597310 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:35.597390 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:36.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:36.096129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:36.096484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:36.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:36.596096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:36.596429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:37.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:37.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:37.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:37.596153 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:37.596232 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:37.596568 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:38.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:38.096447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:38.096490 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:38.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:38.596129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:38.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:39.096163 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:39.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:39.096577 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:39.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:39.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:39.596431 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:40.096080 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:40.096167 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:40.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:40.596326 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:40.596447 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:40.596781 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:41.096558 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:41.096630 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:41.096876 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:41.596824 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:41.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:41.597258 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:42.096037 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:42.096146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:42.097461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1222 00:28:42.097586 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:42.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:42.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:42.596411 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:43.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:43.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:43.096540 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:43.596134 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:43.596208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:43.596546 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:44.096098 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:44.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:44.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:44.596128 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:44.596215 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:44.596558 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:44.596612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:45.096183 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:45.096278 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:45.096726 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:45.596446 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:45.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:45.596802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:46.597275 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-837733 request/response cycle repeats every ~500ms through 00:28:58.596, each response empty, with the same node_ready.go:55 "connection refused" warning logged at 00:28:49, 00:28:51, 00:28:53, 00:28:56 and 00:28:58 ...]
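The loop above is minikube waiting for the node's "Ready" condition to come true while the apiserver on 192.168.49.2:8441 is refusing connections. As a rough illustration of what node_ready.go is doing, here is a minimal client-go sketch of the same poll-and-retry pattern; this is not minikube's actual implementation, and the kubeconfig path and node name are assumptions taken from the log above:

```go
// Minimal sketch: poll a node's Ready condition every ~500ms, retrying on error.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and node name are taken from the log above, purely for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "functional-837733", metav1.GetOptions{})
		if err != nil {
			// Corresponds to the W... node_ready.go:55 lines: log and retry.
			fmt.Printf("error getting node condition \"Ready\" status (will retry): %v\n", err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the ~500ms cadence seen in the timestamps
	}
}
```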
	I1222 00:28:59.061108 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:59.096537 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.096605 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.096887 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.132532 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138577 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138673 1915516 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:28:59.141746 1915516 out.go:179] * Enabled addons: 
	I1222 00:28:59.145405 1915516 addons.go:530] duration metric: took 1m58.040642899s for enable addons: enabled=[]
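The addon failure above is a secondary symptom of the same outage: kubectl cannot download the OpenAPI schema because the apiserver is down on localhost:8441, so the suggested --validate=false workaround would not help until the apiserver is reachable again. As a hedged sketch of the "apply failed, will retry" behavior logged by addons.go, the following Go snippet re-runs the same kubectl command with a retry loop; the retry count and backoff are illustrative assumptions, not minikube's actual values:

```go
// Minimal sketch: retry the storage-provisioner apply seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Command taken verbatim from the log; paths are specific to this test run.
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for attempt := 1; attempt <= 5; attempt++ { // retry count is an assumption
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err == nil {
			fmt.Println("applied storage-provisioner addon")
			return
		}
		fmt.Printf("apply failed (attempt %d), will retry: %v\n%s\n", attempt, err, out)
		time.Sleep(2 * time.Second) // backoff interval is an assumption
	}
	fmt.Println("! Enabling 'storage-provisioner' returned an error")
}
```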
	I1222 00:28:59.596293 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.596374 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.596770 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:00.596370 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-837733 request/response cycle repeats every ~500ms through 00:29:45.596, each response empty, with the same node_ready.go:55 "connection refused" warning logged roughly every 2-2.5s (last at 00:29:44.097) ...]
	I1222 00:29:46.096043 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.096122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.596472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:46.596528 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:47.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.096065 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.096384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:47.596069 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.596146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.096477 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.596372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:49.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:49.096506 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:49.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.596112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.596446 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.096191 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.596046 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.596450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:51.096262 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.096340 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.096671 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:51.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:51.596674 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.596745 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.597007 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.096802 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.096883 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.097238 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.595995 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.596076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.096179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.596097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.596419 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:53.596472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:54.096190 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.096273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.096617 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:54.595974 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.596313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.096097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.596182 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.596258 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.596563 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:55.596609 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:56.096131 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.096478 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:56.596450 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.596522 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.596813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:58.096113 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.096201 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.096499 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:58.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:58.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.096348 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:00.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.096200 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.096548 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:00.096612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:00.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.096631 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.096833 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.596402 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.596486 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.596923 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:02.096741 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.096817 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.097099 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:02.097145 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:02.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.596554 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.596891 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.096678 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.096749 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.097049 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.596893 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.597160 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:04.096926 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.097001 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.097385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:04.097441 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:04.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.596406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.596102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.596436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.096136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.596009 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.596350 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:06.596396 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:07.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:07.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.596115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.096165 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.096526 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:08.596499 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:09.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.096074 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.096386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:09.596090 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.596516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.096073 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.096161 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:11.096517 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.096604 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:11.097278 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:11.596969 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.597041 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.597372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.096147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.596282 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.596358 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.096032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.096105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.096445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.596024 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.596105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.596479 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:13.596523 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:14.096010 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.096415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:14.596099 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.096501 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.596363 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:16.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:16.096639 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:16.596380 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.596453 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.596745 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.096475 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.096802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.596630 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.596731 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.597138 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:18.096998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.097081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.097420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:18.097478 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:18.596096 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.096545 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.096076 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:20.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:21.096445 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.096524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.096840 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:21.596761 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.596832 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.597105 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.096883 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.096964 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.097271 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.596982 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.597056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.597440 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:22.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:23.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:23.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.596157 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.596226 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:25.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:25.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:25.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.596100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.596424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.096120 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.096534 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.596546 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.596888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:27.096591 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.096677 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.097053 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:27.097108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:27.596848 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.596955 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.096089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.096457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.596475 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.096142 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.096209 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:29.596527 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:30.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.096205 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:30.596200 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.596525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.096447 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.096541 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.596722 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.596793 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.597128 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:31.597181 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:32.096921 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.096995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:32.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.596073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.596107 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.596177 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.596451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:34.096133 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.096208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.096544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:34.096604 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:34.596306 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.596398 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.596763 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.096549 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.096621 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.096956 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.596693 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.596770 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.597109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:36.096908 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.096988 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.097351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:36.097407 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:36.596345 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.596419 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.596680 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.096025 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.096427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.096197 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.596567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:38.596620 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:39.096295 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.096370 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.596068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.596381 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.096092 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.596269 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.596348 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.596667 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:40.596717 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:41.096474 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.096825 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:41.596734 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.596808 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.597164 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.096930 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.097022 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.097406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.596367 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:43.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.096094 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:43.096482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:43.596115 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.596196 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.596500 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:44.095990 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:44.096056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:44.096329 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:44.595987 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:44.596060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:44.596378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:45.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:45.096182 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:45.096632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:45.096698 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:45.596155 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:45.596221 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:45.596471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:46.096004 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:46.096083 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:46.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:46.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:46.596090 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:46.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:47.096110 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:47.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:47.096527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:47.596194 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:47.596265 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:47.596618 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:47.596676 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:48.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:48.096139 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:48.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:48.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:48.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:48.596328 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:49.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:49.096096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:49.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:49.596175 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:49.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:49.596561 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:50.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:50.096061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:50.096371 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:50.096418 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:50.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:50.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:50.596449 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:51.096398 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:51.096473 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:51.096792 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:51.596692 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:51.596763 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:51.597029 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:52.096854 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:52.096954 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:52.097313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:52.097385 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:52.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:52.596133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:52.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:53.095986 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:53.096060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:53.096364 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:53.596043 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:53.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:53.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:54.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:54.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:54.096450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:54.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:54.596076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:54.596389 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:54.596439 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:55.096079 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:55.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:55.096554 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:55.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:55.596140 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:55.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:56.096186 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:56.096255 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:56.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:56.596413 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:56.596495 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:56.597103 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:56.597155 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:57.096854 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:57.096939 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:57.097279 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:57.596919 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:57.596998 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:57.597248 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:58.096981 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:58.097060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:58.097406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:58.595936 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:58.596020 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:58.596366 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:59.096048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:59.096113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:59.096404 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:59.096443 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:59.596030 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:59.596110 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:59.596437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:00.112227 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:00.112324 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:00.112677 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:00.596304 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:00.596376 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:00.596639 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:01.096566 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:01.096650 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:01.096999 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:01.097053 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:01.596950 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:01.597041 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:01.597377 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:02.096463 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:02.096586 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:02.096933 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:02.596593 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:02.596668 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:02.597021 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:03.096676 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:03.096752 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:03.097161 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:03.097215 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:03.596799 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:03.596879 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:03.597210 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:04.095922 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:04.095999 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:04.096351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:04.596068 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:04.596146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:04.596471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:05.096006 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:05.096092 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:05.096369 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:05.596027 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:05.596106 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:05.596440 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:05.596494 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:06.096082 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:06.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:06.096460 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:06.596324 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:06.596435 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:06.596689 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:07.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:07.096113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:07.096437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:07.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:07.596220 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:07.596550 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:07.596603 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:08.096017 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:08.096090 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:08.096360 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:08.596054 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:08.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:08.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:09.096188 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:09.096269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:09.096598 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:09.596123 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:09.596195 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:09.596472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:10.096085 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:10.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:10.096519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:10.096576 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:10.596225 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:10.596297 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:10.596606 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:11.096624 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:11.096706 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:11.097010 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:11.596932 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:11.597006 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:11.597391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:12.096053 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:12.096126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:12.096467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:12.595990 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:12.596062 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:12.596308 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:12.596356 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:13.096021 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:13.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:13.096444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:13.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:13.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:13.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:14.096148 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:14.096223 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:14.096479 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:14.596057 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:14.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:14.596452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:14.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:15.096068 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:15.096144 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:15.096443 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:15.596144 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:15.596224 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:15.596509 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:16.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:16.096129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:16.096482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:16.596457 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:16.596533 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:16.596848 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:16.596913 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:17.096611 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:17.096699 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:17.097003 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:17.596626 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:17.596727 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:17.597060 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:18.096680 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:18.096762 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:18.097085 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:18.596689 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:18.596752 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:18.597053 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:18.597096 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:19.096874 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:19.096949 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:19.097291 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:19.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:19.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:19.596452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:20.096136 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:20.096205 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:20.096486 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:20.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:20.596098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:20.596435 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:21.096417 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:21.096492 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:21.096805 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:21.096849 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:21.596617 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:21.596686 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:21.596951 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:22.096805 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:22.096887 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:22.097262 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:22.596005 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:22.596085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:22.596438 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:23.096004 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:23.096077 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:23.096377 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:23.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:23.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:23.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:23.596497 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:24.096207 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:24.096291 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:24.096641 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:24.595992 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:24.596065 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:24.596365 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:25.096084 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:25.096162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:25.096470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:25.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:25.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:25.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:25.596528 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:26.096186 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:26.096259 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:26.096535 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:26.596641 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:26.596715 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:26.597033 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:27.096832 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:27.096920 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:27.097265 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:27.595994 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:27.596063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:27.596378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:28.096050 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:28.096126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:28.096457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:28.096502 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:28.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:28.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:28.596420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:29.095982 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:29.096048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:29.096296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:29.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:29.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:29.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:30.096147 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:30.096226 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:30.096531 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:30.096584 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:30.595996 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:30.596062 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:30.596308 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:31.096270 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:31.096351 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:31.096682 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:31.596652 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:31.596724 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:31.597083 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:32.096810 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:32.096900 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:32.097155 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:32.097194 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[log elided for length: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 request/response pair repeats every ~500 ms from 00:31:32.595 through 00:32:32.096, always with status="" headers="" milliseconds=0, and node_ready.go logs the same "will retry" warning roughly every 2.5 s (W1222 00:31:34.596630 through W1222 00:32:30.096630); every attempt fails with: dial tcp 192.168.49.2:8441: connect: connection refused]
	I1222 00:32:32.596022 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:32.596482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:33.096166 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.096574 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:33.596105 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.596178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.096079 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.096492 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.596212 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.596289 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.596632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:34.596687 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:35.096345 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.096429 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.096712 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:35.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.596142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.596460 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.096237 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.096579 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.596663 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.596739 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.597059 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:36.597108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:37.096872 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.096958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:37.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.596110 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.596433 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.096109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.096379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.596113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:39.096175 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:39.096631 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:39.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.596384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.096140 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.096474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.596119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:41.096370 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.096433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:41.096759 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:41.596613 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.596690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.596999 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.096724 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.096822 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.097289 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.596970 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.597040 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.597359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.096020 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.595988 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.596385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:43.596439 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:44.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.096351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:44.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.096209 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.096309 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.096703 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.595999 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.596318 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:46.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.096535 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:46.096587 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:46.596358 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.596433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.596764 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.096505 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.096569 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.096863 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.596617 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.597077 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:48.096892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.096972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:48.097376 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:48.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.096001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.096409 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.596021 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.596390 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.096100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.096418 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.595985 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.596374 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:50.596428 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:51.096388 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.096466 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.096827 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:51.596847 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.596923 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.597181 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.095961 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.096037 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.096403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.596120 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.596192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.596527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:52.596580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:53.096225 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.096307 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:53.596232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.596643 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.096242 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.096330 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.096701 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.596365 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.596434 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.596698 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:54.596749 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:55.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:55.596071 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.596147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.096173 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.596488 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.596560 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.596914 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:56.596966 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:57.096719 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.096795 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.097343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:57.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.596070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.596325 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.096471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.596425 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:59.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.096321 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:59.096369 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:59.596006 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.096151 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.096235 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.096597 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.596299 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.596383 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.596831 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:01.096422 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.096517 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.096896 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:33:01.096952 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:33:01.596985 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.597111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.597684 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:02.096224 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:02.096288 1915516 node_ready.go:38] duration metric: took 6m0.000473787s for node "functional-837733" to be "Ready" ...
	I1222 00:33:02.099371 1915516 out.go:203] 
	W1222 00:33:02.102245 1915516 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:33:02.102270 1915516 out.go:285] * 
	W1222 00:33:02.104397 1915516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:33:02.107416 1915516 out.go:203] 
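	[editor's note] The loop above is minikube polling GET /api/v1/nodes/functional-837733 every 500ms until its 6m deadline lapses; every attempt fails with connection refused because nothing is listening on 192.168.49.2:8441. A minimal sketch for reproducing the same check by hand (node name, endpoint, and timeout are taken from the log above; kubectl is assumed to be pointed at this cluster):

	    # Wait on the Ready condition the way the failed node_ready.go loop does.
	    kubectl wait --for=condition=Ready node/functional-837733 --timeout=6m

	    # Or hit the polled endpoint directly; "connection refused" here reproduces
	    # the dial tcp 192.168.49.2:8441 errors logged above.
	    curl -sk --max-time 2 https://192.168.49.2:8441/api/v1/nodes/functional-837733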
	
	
	==> CRI-O <==
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215802098Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.21580977Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215814963Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215825138Z" level=info msg="RDT not available in the host system"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.215839906Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.21663043Z" level=info msg="Conmon does support the --sync option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.216657564Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.216672718Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.217304051Z" level=info msg="Conmon does support the --sync option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.217387167Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.217509526Z" level=info msg="Updated default CNI network name to "
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.218199729Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.218658686Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.218721314Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282258133Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282418842Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282513739Z" level=info msg="Create NRI interface"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282663019Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282683572Z" level=info msg="runtime interface created"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282699145Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282705364Z" level=info msg="runtime interface starting up..."
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282711313Z" level=info msg="starting plugins..."
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282726615Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:26:59 functional-837733 crio[5399]: time="2025-12-22T00:26:59.282801854Z" level=info msg="No systemd watchdog enabled"
	Dec 22 00:26:59 functional-837733 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
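	[editor's note] CRI-O itself comes up cleanly here, and the configuration dump above shows cgroup_manager = "cgroupfs" with default_runtime = "crun", so the container runtime is not the failing component. To re-check the effective config on a live node (a sketch; assumes the crio binary is on the node's PATH, as it is in minikube's kicbase image):

	    # Print CRI-O's effective configuration and compare the two fields logged above.
	    minikube -p functional-837733 ssh -- sudo crio config 2>/dev/null | grep -E 'cgroup_manager|default_runtime'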
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:33:06.435163    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:06.435963    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:06.437647    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:06.438297    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:06.439908    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
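	[editor's note] kubectl inside the node fails against localhost:8441 exactly as the host-side client fails against 192.168.49.2:8441, which points at the apiserver process itself being down rather than a broken port mapping. A quick triage from inside the node (a sketch; profile name taken from this run, and ss/curl assumed present in the kicbase image):

	    # If this also refuses, the apiserver is not running at all inside the node.
	    minikube -p functional-837733 ssh -- curl -sk https://localhost:8441/healthz

	    # Confirm nothing is bound to 8441.
	    minikube -p functional-837733 ssh -- sudo ss -ltn 'sport = :8441'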
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:33:06 up 1 day,  7:15,  0 user,  load average: 0.02, 0.21, 0.75
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:33:03 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:04 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 22 00:33:04 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:04 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:04 functional-837733 kubelet[8645]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:04 functional-837733 kubelet[8645]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:04 functional-837733 kubelet[8645]: E1222 00:33:04.402186    8645 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:04 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:04 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:05 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 22 00:33:05 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:05 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:05 functional-837733 kubelet[8666]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:05 functional-837733 kubelet[8666]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:05 functional-837733 kubelet[8666]: E1222 00:33:05.149108    8666 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:05 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:05 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:05 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 22 00:33:05 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:05 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:05 functional-837733 kubelet[8687]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:05 functional-837733 kubelet[8687]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:05 functional-837733 kubelet[8687]: E1222 00:33:05.919532    8687 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:05 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:05 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
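	[editor's note] The kubelet section above is the root cause of this run's failures: every restart (counter 1140-1142) dies in config validation with "kubelet is configured to not run on a host using cgroup v1", and the kernel section shows a 5.15.0-1084-aws Ubuntu 20.04 host, which boots the legacy cgroup v1 hierarchy by default. A minimal check, plus the usual remedy (the GRUB edit is an assumption about this host, not something verified in this run):

	    # cgroup2fs means the unified (v2) hierarchy; tmpfs means legacy cgroup v1,
	    # which is what this kubelet build refuses to run on.
	    stat -fc %T /sys/fs/cgroup/

	    # Typical fix on a cgroup v1 host: enable the unified hierarchy and reboot.
	    # Add to GRUB_CMDLINE_LINUX in /etc/default/grub:
	    #   systemd.unified_cgroup_hierarchy=1
	    sudo update-grub && sudo reboot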
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (329.454971ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.36s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 kubectl -- --context functional-837733 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 kubectl -- --context functional-837733 get pods: exit status 1 (101.114977ms)
** stderr ** 
	E1222 00:33:14.314590 1920782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:14.314987 1920782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:14.316398 1920782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:14.316668 1920782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:14.318029 1920782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-837733 kubectl -- --context functional-837733 get pods": exit status 1
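[editor's note] Everything after the bare -- in this invocation is handed to the kubectl binary minikube bundles, so the failure is identical to running kubectl directly. Reproducing it without the test harness (a sketch; assumes the functional-837733 context exists in the default kubeconfig):

    # The failing call from functional_test.go, by hand.
    out/minikube-linux-arm64 -p functional-837733 kubectl -- --context functional-837733 get pods

    # Equivalent with a host kubectl.
    kubectl --context functional-837733 get pods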
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:
-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
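Editor's note: in the inspect output above, every container port is published only on 127.0.0.1 with an ephemeral host port (8441/tcp maps to 38474), so from the host the apiserver is normally reached via localhost rather than the container IP. A standalone sketch (editor's illustration) that resolves the mapping with the same Go-template style the harness itself uses for 22/tcp later in these logs:

	// portmap.go: editor's sketch resolving the host port Docker mapped for
	// the apiserver container port 8441/tcp of functional-837733, mirroring
	// the `docker container inspect -f` template seen further down this log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-837733").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container captured above this prints 38474.
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}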
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (303.1613ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 logs -n 25: (1.052764409s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image ls --format json --alsologtostderr                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                            │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ start          │ -p functional-837733 --alsologtostderr -v=8                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:26 UTC │                     │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:latest                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add minikube-local-cache-test:functional-837733                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache delete minikube-local-cache-test:functional-837733                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl images                                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ cache          │ functional-837733 cache reload                                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ kubectl        │ functional-837733 kubectl -- --context functional-837733 get pods                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:26:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:26:56.306253 1915516 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:26:56.306390 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306407 1915516 out.go:374] Setting ErrFile to fd 2...
	I1222 00:26:56.306413 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306666 1915516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:26:56.307062 1915516 out.go:368] Setting JSON to false
	I1222 00:26:56.307917 1915516 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112166,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:26:56.307982 1915516 start.go:143] virtualization:  
	I1222 00:26:56.313423 1915516 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:26:56.316413 1915516 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:26:56.316480 1915516 notify.go:221] Checking for updates...
	I1222 00:26:56.322104 1915516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:26:56.324851 1915516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:26:56.327697 1915516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:26:56.330483 1915516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:26:56.333306 1915516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:26:56.336519 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:56.336644 1915516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:26:56.361435 1915516 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:26:56.361557 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.415190 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.406057371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.415295 1915516 docker.go:319] overlay module found
	I1222 00:26:56.418305 1915516 out.go:179] * Using the docker driver based on existing profile
	I1222 00:26:56.421198 1915516 start.go:309] selected driver: docker
	I1222 00:26:56.421214 1915516 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.421302 1915516 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:26:56.421450 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.481024 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.471849294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.481474 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:56.481538 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:56.481593 1915516 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.484769 1915516 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:26:56.487560 1915516 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:26:56.490463 1915516 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:26:56.493542 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:56.493593 1915516 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:26:56.493605 1915516 cache.go:65] Caching tarball of preloaded images
	I1222 00:26:56.493623 1915516 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:26:56.493685 1915516 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:26:56.493695 1915516 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:26:56.493807 1915516 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:26:56.512598 1915516 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:26:56.512623 1915516 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:26:56.512639 1915516 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:26:56.512668 1915516 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:26:56.512725 1915516 start.go:364] duration metric: took 34.567µs to acquireMachinesLock for "functional-837733"
	I1222 00:26:56.512749 1915516 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:26:56.512754 1915516 fix.go:54] fixHost starting: 
	I1222 00:26:56.513015 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:26:56.528652 1915516 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:26:56.528683 1915516 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:26:56.531906 1915516 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:26:56.531939 1915516 machine.go:94] provisionDockerMachine start ...
	I1222 00:26:56.532017 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.548171 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.548493 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.548508 1915516 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:26:56.684687 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.684712 1915516 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:26:56.684788 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.701819 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.702143 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.702159 1915516 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:26:56.841689 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.841784 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.859107 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.859418 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.859440 1915516 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:26:56.989653 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:26:56.989683 1915516 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:26:56.989702 1915516 ubuntu.go:190] setting up certificates
	I1222 00:26:56.989718 1915516 provision.go:84] configureAuth start
	I1222 00:26:56.989779 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:57.009887 1915516 provision.go:143] copyHostCerts
	I1222 00:26:57.009938 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.009973 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:26:57.009985 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.010062 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:26:57.010160 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010187 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:26:57.010205 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010234 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:26:57.010285 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010306 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:26:57.010311 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010345 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:26:57.010415 1915516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:26:57.201216 1915516 provision.go:177] copyRemoteCerts
	I1222 00:26:57.201293 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:26:57.201364 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.220369 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.317164 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:26:57.317296 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:26:57.334371 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:26:57.334443 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:26:57.355914 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:26:57.356016 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:26:57.374427 1915516 provision.go:87] duration metric: took 384.686066ms to configureAuth
	I1222 00:26:57.374456 1915516 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:26:57.374673 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:57.374803 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.396901 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:57.397214 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:57.397228 1915516 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:26:57.748387 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:26:57.748426 1915516 machine.go:97] duration metric: took 1.216463326s to provisionDockerMachine
	I1222 00:26:57.748438 1915516 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:26:57.748450 1915516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:26:57.748516 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:26:57.748562 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.764805 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.860955 1915516 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:26:57.863953 1915516 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:26:57.863971 1915516 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:26:57.863984 1915516 command_runner.go:130] > VERSION_ID="12"
	I1222 00:26:57.863988 1915516 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:26:57.863993 1915516 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:26:57.863997 1915516 command_runner.go:130] > ID=debian
	I1222 00:26:57.864002 1915516 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:26:57.864007 1915516 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:26:57.864014 1915516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:26:57.864062 1915516 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:26:57.864078 1915516 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:26:57.864088 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:26:57.864138 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:26:57.864212 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:26:57.864218 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /etc/ssl/certs/18812192.pem
	I1222 00:26:57.864290 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:26:57.864295 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> /etc/test/nested/copy/1881219/hosts
	I1222 00:26:57.864343 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:26:57.871401 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:26:57.887952 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:26:57.904770 1915516 start.go:296] duration metric: took 156.318697ms for postStartSetup
	I1222 00:26:57.904856 1915516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:26:57.904900 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.924440 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.018813 1915516 command_runner.go:130] > 12%
	I1222 00:26:58.018908 1915516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:26:58.024077 1915516 command_runner.go:130] > 172G
	I1222 00:26:58.024155 1915516 fix.go:56] duration metric: took 1.511397372s for fixHost
	I1222 00:26:58.024173 1915516 start.go:83] releasing machines lock for "functional-837733", held for 1.511435197s
	I1222 00:26:58.024244 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:58.041457 1915516 ssh_runner.go:195] Run: cat /version.json
	I1222 00:26:58.041501 1915516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:26:58.041514 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.041557 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.062715 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.064581 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.169068 1915516 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:26:58.266136 1915516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:26:58.269129 1915516 ssh_runner.go:195] Run: systemctl --version
	I1222 00:26:58.275371 1915516 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:26:58.275406 1915516 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:26:58.275466 1915516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:26:58.310186 1915516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:26:58.314515 1915516 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:26:58.314563 1915516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:26:58.314620 1915516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:26:58.322376 1915516 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:26:58.322401 1915516 start.go:496] detecting cgroup driver to use...
	I1222 00:26:58.322452 1915516 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:26:58.322524 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:26:58.337747 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:26:58.350732 1915516 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:26:58.350819 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:26:58.365932 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:26:58.378579 1915516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:26:58.492803 1915516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:26:58.604423 1915516 docker.go:234] disabling docker service ...
	I1222 00:26:58.604496 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:26:58.619028 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:26:58.631440 1915516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:26:58.747823 1915516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:26:58.866539 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:26:58.885381 1915516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:26:58.901152 1915516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1222 00:26:58.902678 1915516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:26:58.902778 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.912460 1915516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:26:58.912567 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.924743 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.935404 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.944063 1915516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:26:58.951908 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.960893 1915516 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.969826 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.978699 1915516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:26:58.985309 1915516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:26:58.986567 1915516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:26:58.993813 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.112143 1915516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:26:59.288646 1915516 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:26:59.288768 1915516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:26:59.292480 1915516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1222 00:26:59.292507 1915516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:26:59.292515 1915516 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1222 00:26:59.292522 1915516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:26:59.292531 1915516 command_runner.go:130] > Access: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292537 1915516 command_runner.go:130] > Modify: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292542 1915516 command_runner.go:130] > Change: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292549 1915516 command_runner.go:130] >  Birth: -
	I1222 00:26:59.292591 1915516 start.go:564] Will wait 60s for crictl version
	I1222 00:26:59.292646 1915516 ssh_runner.go:195] Run: which crictl
	I1222 00:26:59.295996 1915516 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:26:59.296252 1915516 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:26:59.318308 1915516 command_runner.go:130] > Version:  0.1.0
	I1222 00:26:59.318367 1915516 command_runner.go:130] > RuntimeName:  cri-o
	I1222 00:26:59.318402 1915516 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1222 00:26:59.318421 1915516 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:26:59.320537 1915516 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:26:59.320652 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.347108 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.347172 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.347193 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.347237 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.347255 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.347273 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.347296 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.347319 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.347336 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.347352 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.347374 1915516 command_runner.go:130] >      static
	I1222 00:26:59.347390 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.347405 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.347421 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.347442 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.347466 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.347483 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.347500 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.347516 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.347534 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.350153 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.376069 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.376144 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.376164 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.376182 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.376201 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.376235 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.376261 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.376279 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.376305 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.376329 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.376347 1915516 command_runner.go:130] >      static
	I1222 00:26:59.376375 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.376391 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.376409 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.376433 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.376462 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.376479 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.376495 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.376513 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.376545 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.383960 1915516 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:26:59.386763 1915516 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:26:59.402614 1915516 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:26:59.406608 1915516 command_runner.go:130] > 192.168.49.1	host.minikube.internal
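The grep above only verifies that the host.minikube.internal alias already exists on the node. A minimal Go sketch of the same check follows; it assumes the entry is a single tab-separated line exactly as logged, and the "append over SSH" remark is a hypothetical next step, not observed in this log.

// hostscheck.go - sketch of the /etc/hosts probe shown above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println("open /etc/hosts:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Matches the logged entry "192.168.49.1<TAB>host.minikube.internal".
		if strings.HasPrefix(line, "192.168.49.1") &&
			strings.HasSuffix(line, "host.minikube.internal") {
			fmt.Println("entry present:", line)
			return
		}
	}
	fmt.Println("entry missing; it would need to be appended")
}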
	I1222 00:26:59.406693 1915516 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:26:59.406800 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:59.406861 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.440185 1915516 command_runner.go:130] > {
	I1222 00:26:59.440203 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.440208 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440216 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.440220 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440226 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.440230 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440234 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440243 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.440251 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.440254 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440259 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.440263 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440270 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440273 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440277 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440284 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.440288 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440301 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.440305 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440309 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440317 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.440326 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.440329 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440333 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.440337 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440344 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440347 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440350 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440357 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.440361 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440366 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.440369 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440373 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440381 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.440388 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.440392 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440396 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.440404 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.440408 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440411 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440414 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440421 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.440424 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440429 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.440433 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440437 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440445 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.440452 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.440455 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440459 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.440466 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440470 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440474 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440482 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440486 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440489 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440492 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440499 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.440502 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440509 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.440512 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440516 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440524 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.440532 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.440535 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440539 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.440542 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440546 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440549 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440553 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440557 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440560 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440563 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440569 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.440573 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440578 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.440582 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440586 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440594 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.440602 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.440606 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440610 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.440613 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440624 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440627 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440631 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440634 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440637 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440640 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440647 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.440651 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440656 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.440659 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440663 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440671 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.440679 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.440681 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440685 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.440689 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440693 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440697 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440700 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440706 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.440710 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440714 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.440718 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440721 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440729 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.440745 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.440748 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440752 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.440755 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440759 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440762 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440765 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440769 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440775 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440779 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440785 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.440789 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440793 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.440797 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440800 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440808 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.440816 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.440819 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440842 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.440846 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440850 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.440853 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440857 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440860 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.440863 1915516 command_runner.go:130] >     }
	I1222 00:26:59.440865 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.440868 1915516 command_runner.go:130] > }
	I1222 00:26:59.441031 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.441039 1915516 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:26:59.441099 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.463515 1915516 command_runner.go:130] > {
	I1222 00:26:59.463533 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.463538 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463547 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.463552 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463566 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.463570 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463574 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463582 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.463590 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.463594 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463598 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.463602 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463606 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463610 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463613 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463619 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.463624 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463629 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.463632 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463636 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463655 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.463664 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.463667 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463671 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.463675 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463680 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463683 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463686 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463693 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.463697 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463703 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.463706 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463710 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463718 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.463726 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.463730 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463734 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.463741 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.463745 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463748 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463752 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463758 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.463761 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463767 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.463771 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463775 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463782 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.463790 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.463793 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463797 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.463801 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463805 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463808 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463816 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463819 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463823 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463826 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463832 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.463836 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463841 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.463844 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463848 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463855 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.463863 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.463867 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463871 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.463875 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463878 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463882 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463885 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463893 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463896 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463899 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463905 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.463909 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463914 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.463917 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463921 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463929 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.463937 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.463940 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463944 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.463947 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463951 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463954 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463958 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463962 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463965 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463968 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463974 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.463979 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463984 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.463987 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463991 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463998 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.464006 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.464009 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464013 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.464017 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464021 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464024 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464027 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464033 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.464040 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464046 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.464050 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464053 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464061 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.464078 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.464081 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464085 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.464089 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464093 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.464096 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464100 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464103 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464106 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464109 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464115 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.464119 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464123 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.464126 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464130 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464137 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.464145 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.464148 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464151 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.464155 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464158 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.464161 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464165 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464168 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.464171 1915516 command_runner.go:130] >     }
	I1222 00:26:59.464174 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.464176 1915516 command_runner.go:130] > }
	I1222 00:26:59.467117 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.467182 1915516 cache_images.go:86] Images are preloaded, skipping loading
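The "all images are preloaded" / "Images are preloaded, skipping loading" conclusions above follow from matching the repoTags in the crictl JSON against the expected image set for this Kubernetes version. A hedged Go sketch of that comparison follows; the required list is copied from the log output above, and the struct shape mirrors that JSON rather than any actual minikube type.

// preloadcheck.go - sketch of deciding "all images are preloaded" from the
// `crictl images --output json` dump shown above. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of the JSON in the log above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{ // tags taken from the log output above
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1",
		"registry.k8s.io/kube-scheduler:v1.35.0-rc.1",
		"registry.k8s.io/kube-proxy:v1.35.0-rc.1",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
		"registry.k8s.io/pause:3.10.1",
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing:", want)
			return
		}
	}
	fmt.Println("all images are preloaded")
}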
	I1222 00:26:59.467204 1915516 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:26:59.467349 1915516 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
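The kubelet drop-in printed above is a plain systemd unit with a handful of per-node substitutions (binary path by Kubernetes version, hostname override, node IP). A short Go sketch of rendering it with text/template follows; the template body mirrors the log, but the parameter struct is an illustrative assumption, not minikube's config type.

// kubeletunit.go - sketch of rendering the kubelet systemd drop-in shown
// above from a few node parameters. Illustrative only.
package main

import (
	"os"
	"text/template"
)

// unit mirrors the drop-in text in the log, with the three varying
// fields lifted out as template parameters.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the cluster config logged above.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.35.0-rc.1", "functional-837733", "192.168.49.2"})
}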
	I1222 00:26:59.467469 1915516 ssh_runner.go:195] Run: crio config
	I1222 00:26:59.517634 1915516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1222 00:26:59.517672 1915516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1222 00:26:59.517681 1915516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1222 00:26:59.517684 1915516 command_runner.go:130] > #
	I1222 00:26:59.517692 1915516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1222 00:26:59.517698 1915516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1222 00:26:59.517704 1915516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1222 00:26:59.517712 1915516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1222 00:26:59.517716 1915516 command_runner.go:130] > # reload'.
	I1222 00:26:59.517723 1915516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1222 00:26:59.517729 1915516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1222 00:26:59.517735 1915516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1222 00:26:59.517742 1915516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1222 00:26:59.517745 1915516 command_runner.go:130] > [crio]
	I1222 00:26:59.517751 1915516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1222 00:26:59.517756 1915516 command_runner.go:130] > # containers images, in this directory.
	I1222 00:26:59.517994 1915516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1222 00:26:59.518006 1915516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1222 00:26:59.518014 1915516 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1222 00:26:59.518022 1915516 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1222 00:26:59.518026 1915516 command_runner.go:130] > # imagestore = ""
	I1222 00:26:59.518032 1915516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1222 00:26:59.518043 1915516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1222 00:26:59.518048 1915516 command_runner.go:130] > # storage_driver = "overlay"
	I1222 00:26:59.518053 1915516 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1222 00:26:59.518059 1915516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1222 00:26:59.518209 1915516 command_runner.go:130] > # storage_option = [
	I1222 00:26:59.518216 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.518229 1915516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1222 00:26:59.518237 1915516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1222 00:26:59.518375 1915516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1222 00:26:59.518385 1915516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1222 00:26:59.518392 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1222 00:26:59.518406 1915516 command_runner.go:130] > # always happen on a node reboot
	I1222 00:26:59.518412 1915516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1222 00:26:59.518638 1915516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1222 00:26:59.518654 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1222 00:26:59.518660 1915516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1222 00:26:59.521919 1915516 command_runner.go:130] > # version_file_persist = ""
	I1222 00:26:59.521945 1915516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1222 00:26:59.521986 1915516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1222 00:26:59.521998 1915516 command_runner.go:130] > # internal_wipe = true
	I1222 00:26:59.522012 1915516 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1222 00:26:59.522023 1915516 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1222 00:26:59.522028 1915516 command_runner.go:130] > # internal_repair = true
	I1222 00:26:59.522049 1915516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1222 00:26:59.522075 1915516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1222 00:26:59.522087 1915516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1222 00:26:59.522103 1915516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1222 00:26:59.522130 1915516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1222 00:26:59.522140 1915516 command_runner.go:130] > [crio.api]
	I1222 00:26:59.522156 1915516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1222 00:26:59.522172 1915516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1222 00:26:59.522199 1915516 command_runner.go:130] > # IP address on which the stream server will listen.
	I1222 00:26:59.522210 1915516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1222 00:26:59.522227 1915516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1222 00:26:59.522243 1915516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1222 00:26:59.522251 1915516 command_runner.go:130] > # stream_port = "0"
	I1222 00:26:59.522257 1915516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1222 00:26:59.522262 1915516 command_runner.go:130] > # stream_enable_tls = false
	I1222 00:26:59.522297 1915516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1222 00:26:59.522308 1915516 command_runner.go:130] > # stream_idle_timeout = ""
	I1222 00:26:59.522315 1915516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1222 00:26:59.522330 1915516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522334 1915516 command_runner.go:130] > # stream_tls_cert = ""
	I1222 00:26:59.522352 1915516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1222 00:26:59.522421 1915516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522433 1915516 command_runner.go:130] > # stream_tls_key = ""
	I1222 00:26:59.522440 1915516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1222 00:26:59.522455 1915516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1222 00:26:59.522468 1915516 command_runner.go:130] > # automatically pick up the changes.
	I1222 00:26:59.522477 1915516 command_runner.go:130] > # stream_tls_ca = ""
	I1222 00:26:59.522505 1915516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522519 1915516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1222 00:26:59.522527 1915516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522540 1915516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1222 00:26:59.522547 1915516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1222 00:26:59.522556 1915516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1222 00:26:59.522560 1915516 command_runner.go:130] > [crio.runtime]
	I1222 00:26:59.522574 1915516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1222 00:26:59.522584 1915516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1222 00:26:59.522588 1915516 command_runner.go:130] > # "nofile=1024:2048"
	I1222 00:26:59.522599 1915516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1222 00:26:59.522607 1915516 command_runner.go:130] > # default_ulimits = [
	I1222 00:26:59.522611 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522624 1915516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1222 00:26:59.522631 1915516 command_runner.go:130] > # no_pivot = false
	I1222 00:26:59.522642 1915516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1222 00:26:59.522652 1915516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1222 00:26:59.522657 1915516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1222 00:26:59.522672 1915516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1222 00:26:59.522681 1915516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1222 00:26:59.522689 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522700 1915516 command_runner.go:130] > # conmon = ""
	I1222 00:26:59.522705 1915516 command_runner.go:130] > # Cgroup setting for conmon
	I1222 00:26:59.522717 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1222 00:26:59.522721 1915516 command_runner.go:130] > conmon_cgroup = "pod"
	I1222 00:26:59.522733 1915516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1222 00:26:59.522741 1915516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1222 00:26:59.522760 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522767 1915516 command_runner.go:130] > # conmon_env = [
	I1222 00:26:59.522770 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522776 1915516 command_runner.go:130] > # Additional environment variables to set for all the
	I1222 00:26:59.522786 1915516 command_runner.go:130] > # containers. These are overridden if set in the
	I1222 00:26:59.522796 1915516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1222 00:26:59.522803 1915516 command_runner.go:130] > # default_env = [
	I1222 00:26:59.522807 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522813 1915516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1222 00:26:59.522829 1915516 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1222 00:26:59.522839 1915516 command_runner.go:130] > # selinux = false
	I1222 00:26:59.522846 1915516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1222 00:26:59.522861 1915516 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1222 00:26:59.522870 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522874 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.522886 1915516 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1222 00:26:59.522895 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522900 1915516 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1222 00:26:59.522911 1915516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1222 00:26:59.522926 1915516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1222 00:26:59.522936 1915516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1222 00:26:59.522948 1915516 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1222 00:26:59.522956 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522962 1915516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1222 00:26:59.522976 1915516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1222 00:26:59.522984 1915516 command_runner.go:130] > # the cgroup blockio controller.
	I1222 00:26:59.522992 1915516 command_runner.go:130] > # blockio_config_file = ""
	I1222 00:26:59.522999 1915516 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1222 00:26:59.523010 1915516 command_runner.go:130] > # blockio parameters.
	I1222 00:26:59.523015 1915516 command_runner.go:130] > # blockio_reload = false
	I1222 00:26:59.523027 1915516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1222 00:26:59.523033 1915516 command_runner.go:130] > # irqbalance daemon.
	I1222 00:26:59.523049 1915516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1222 00:26:59.523059 1915516 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1222 00:26:59.523067 1915516 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1222 00:26:59.523081 1915516 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1222 00:26:59.523091 1915516 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1222 00:26:59.523103 1915516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1222 00:26:59.523112 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523116 1915516 command_runner.go:130] > # rdt_config_file = ""
	I1222 00:26:59.523122 1915516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1222 00:26:59.523133 1915516 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1222 00:26:59.523140 1915516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1222 00:26:59.523148 1915516 command_runner.go:130] > # separate_pull_cgroup = ""
	I1222 00:26:59.523159 1915516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1222 00:26:59.523202 1915516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1222 00:26:59.523210 1915516 command_runner.go:130] > # will be added.
	I1222 00:26:59.523215 1915516 command_runner.go:130] > # default_capabilities = [
	I1222 00:26:59.523219 1915516 command_runner.go:130] > # 	"CHOWN",
	I1222 00:26:59.523223 1915516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1222 00:26:59.523380 1915516 command_runner.go:130] > # 	"FSETID",
	I1222 00:26:59.523391 1915516 command_runner.go:130] > # 	"FOWNER",
	I1222 00:26:59.523395 1915516 command_runner.go:130] > # 	"SETGID",
	I1222 00:26:59.523399 1915516 command_runner.go:130] > # 	"SETUID",
	I1222 00:26:59.523427 1915516 command_runner.go:130] > # 	"SETPCAP",
	I1222 00:26:59.523432 1915516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1222 00:26:59.523436 1915516 command_runner.go:130] > # 	"KILL",
	I1222 00:26:59.523439 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523448 1915516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1222 00:26:59.523464 1915516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1222 00:26:59.523470 1915516 command_runner.go:130] > # add_inheritable_capabilities = false
	I1222 00:26:59.523476 1915516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1222 00:26:59.523482 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523486 1915516 command_runner.go:130] > default_sysctls = [
	I1222 00:26:59.523491 1915516 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1222 00:26:59.523494 1915516 command_runner.go:130] > ]
	I1222 00:26:59.523499 1915516 command_runner.go:130] > # List of devices on the host that a
	I1222 00:26:59.523506 1915516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1222 00:26:59.523509 1915516 command_runner.go:130] > # allowed_devices = [
	I1222 00:26:59.523513 1915516 command_runner.go:130] > # 	"/dev/fuse",
	I1222 00:26:59.523517 1915516 command_runner.go:130] > # 	"/dev/net/tun",
	I1222 00:26:59.523520 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523525 1915516 command_runner.go:130] > # List of additional devices. specified as
	I1222 00:26:59.523533 1915516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1222 00:26:59.523538 1915516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1222 00:26:59.523544 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523548 1915516 command_runner.go:130] > # additional_devices = [
	I1222 00:26:59.523551 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523557 1915516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1222 00:26:59.523561 1915516 command_runner.go:130] > # cdi_spec_dirs = [
	I1222 00:26:59.523565 1915516 command_runner.go:130] > # 	"/etc/cdi",
	I1222 00:26:59.523569 1915516 command_runner.go:130] > # 	"/var/run/cdi",
	I1222 00:26:59.523574 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523581 1915516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1222 00:26:59.523588 1915516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1222 00:26:59.523591 1915516 command_runner.go:130] > # Defaults to false.
	I1222 00:26:59.523596 1915516 command_runner.go:130] > # device_ownership_from_security_context = false
	I1222 00:26:59.523603 1915516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1222 00:26:59.523609 1915516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1222 00:26:59.523612 1915516 command_runner.go:130] > # hooks_dir = [
	I1222 00:26:59.523617 1915516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1222 00:26:59.523622 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523628 1915516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1222 00:26:59.523635 1915516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1222 00:26:59.523642 1915516 command_runner.go:130] > # its default mounts from the following two files:
	I1222 00:26:59.523644 1915516 command_runner.go:130] > #
	I1222 00:26:59.523651 1915516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1222 00:26:59.523658 1915516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1222 00:26:59.523663 1915516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1222 00:26:59.523666 1915516 command_runner.go:130] > #
	I1222 00:26:59.523672 1915516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1222 00:26:59.523679 1915516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1222 00:26:59.523686 1915516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1222 00:26:59.523692 1915516 command_runner.go:130] > #      only add mounts it finds in this file.
	I1222 00:26:59.523695 1915516 command_runner.go:130] > #
	I1222 00:26:59.523700 1915516 command_runner.go:130] > # default_mounts_file = ""
	I1222 00:26:59.523706 1915516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1222 00:26:59.523713 1915516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1222 00:26:59.523716 1915516 command_runner.go:130] > # pids_limit = -1
	I1222 00:26:59.523723 1915516 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1222 00:26:59.523729 1915516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1222 00:26:59.523735 1915516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1222 00:26:59.523744 1915516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1222 00:26:59.523748 1915516 command_runner.go:130] > # log_size_max = -1
	I1222 00:26:59.523755 1915516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1222 00:26:59.523759 1915516 command_runner.go:130] > # log_to_journald = false
	I1222 00:26:59.523765 1915516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1222 00:26:59.523770 1915516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1222 00:26:59.523775 1915516 command_runner.go:130] > # Path to directory for container attach sockets.
	I1222 00:26:59.523779 1915516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1222 00:26:59.523784 1915516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1222 00:26:59.523788 1915516 command_runner.go:130] > # bind_mount_prefix = ""
	I1222 00:26:59.523794 1915516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1222 00:26:59.523798 1915516 command_runner.go:130] > # read_only = false
	I1222 00:26:59.523804 1915516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1222 00:26:59.523810 1915516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1222 00:26:59.523815 1915516 command_runner.go:130] > # live configuration reload.
	I1222 00:26:59.523818 1915516 command_runner.go:130] > # log_level = "info"
	I1222 00:26:59.523825 1915516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1222 00:26:59.523830 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523834 1915516 command_runner.go:130] > # log_filter = ""
	I1222 00:26:59.523841 1915516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523847 1915516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1222 00:26:59.523850 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523858 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523862 1915516 command_runner.go:130] > # uid_mappings = ""
	I1222 00:26:59.523868 1915516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523874 1915516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1222 00:26:59.523879 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523887 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523891 1915516 command_runner.go:130] > # gid_mappings = ""
	I1222 00:26:59.523897 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1222 00:26:59.523903 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523909 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523917 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523921 1915516 command_runner.go:130] > # minimum_mappable_uid = -1
	I1222 00:26:59.523928 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1222 00:26:59.523934 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523940 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523948 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523952 1915516 command_runner.go:130] > # minimum_mappable_gid = -1
	I1222 00:26:59.523958 1915516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1222 00:26:59.523964 1915516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1222 00:26:59.523970 1915516 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1222 00:26:59.523975 1915516 command_runner.go:130] > # ctr_stop_timeout = 30
	I1222 00:26:59.523981 1915516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1222 00:26:59.523987 1915516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1222 00:26:59.523992 1915516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1222 00:26:59.523997 1915516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1222 00:26:59.524001 1915516 command_runner.go:130] > # drop_infra_ctr = true
	I1222 00:26:59.524011 1915516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1222 00:26:59.524017 1915516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1222 00:26:59.524025 1915516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1222 00:26:59.524033 1915516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1222 00:26:59.524041 1915516 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1222 00:26:59.524048 1915516 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1222 00:26:59.524053 1915516 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1222 00:26:59.524059 1915516 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1222 00:26:59.524063 1915516 command_runner.go:130] > # shared_cpuset = ""
	I1222 00:26:59.524068 1915516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1222 00:26:59.524074 1915516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1222 00:26:59.524078 1915516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1222 00:26:59.524086 1915516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1222 00:26:59.524090 1915516 command_runner.go:130] > # pinns_path = ""
	I1222 00:26:59.524095 1915516 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1222 00:26:59.524101 1915516 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1222 00:26:59.524106 1915516 command_runner.go:130] > # enable_criu_support = true
	I1222 00:26:59.524111 1915516 command_runner.go:130] > # Enable/disable the generation of container and
	I1222 00:26:59.524117 1915516 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1222 00:26:59.524121 1915516 command_runner.go:130] > # enable_pod_events = false
	I1222 00:26:59.524128 1915516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1222 00:26:59.524133 1915516 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1222 00:26:59.524137 1915516 command_runner.go:130] > # default_runtime = "crun"
	I1222 00:26:59.524142 1915516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1222 00:26:59.524150 1915516 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1222 00:26:59.524159 1915516 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1222 00:26:59.524164 1915516 command_runner.go:130] > # creation as a file is not desired either.
	I1222 00:26:59.524173 1915516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1222 00:26:59.524178 1915516 command_runner.go:130] > # the hostname is being managed dynamically.
	I1222 00:26:59.524182 1915516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1222 00:26:59.524185 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.524191 1915516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1222 00:26:59.524198 1915516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1222 00:26:59.524204 1915516 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1222 00:26:59.524209 1915516 command_runner.go:130] > # Each entry in the table should follow the format:
	I1222 00:26:59.524212 1915516 command_runner.go:130] > #
	I1222 00:26:59.524217 1915516 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1222 00:26:59.524221 1915516 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1222 00:26:59.524226 1915516 command_runner.go:130] > # runtime_type = "oci"
	I1222 00:26:59.524230 1915516 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1222 00:26:59.524236 1915516 command_runner.go:130] > # inherit_default_runtime = false
	I1222 00:26:59.524246 1915516 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1222 00:26:59.524250 1915516 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1222 00:26:59.524255 1915516 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1222 00:26:59.524259 1915516 command_runner.go:130] > # monitor_env = []
	I1222 00:26:59.524264 1915516 command_runner.go:130] > # privileged_without_host_devices = false
	I1222 00:26:59.524268 1915516 command_runner.go:130] > # allowed_annotations = []
	I1222 00:26:59.524274 1915516 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1222 00:26:59.524277 1915516 command_runner.go:130] > # no_sync_log = false
	I1222 00:26:59.524281 1915516 command_runner.go:130] > # default_annotations = {}
	I1222 00:26:59.524285 1915516 command_runner.go:130] > # stream_websockets = false
	I1222 00:26:59.524289 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.524337 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.524343 1915516 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1222 00:26:59.524349 1915516 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1222 00:26:59.524356 1915516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1222 00:26:59.524362 1915516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1222 00:26:59.524365 1915516 command_runner.go:130] > #   in $PATH.
	I1222 00:26:59.524371 1915516 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1222 00:26:59.524376 1915516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1222 00:26:59.524382 1915516 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1222 00:26:59.524386 1915516 command_runner.go:130] > #   state.
	I1222 00:26:59.524392 1915516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1222 00:26:59.524399 1915516 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1222 00:26:59.524405 1915516 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1222 00:26:59.524411 1915516 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1222 00:26:59.524417 1915516 command_runner.go:130] > #   the values from the default runtime on load time.
	I1222 00:26:59.524423 1915516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1222 00:26:59.524429 1915516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1222 00:26:59.524435 1915516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1222 00:26:59.524441 1915516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1222 00:26:59.524446 1915516 command_runner.go:130] > #   The currently recognized values are:
	I1222 00:26:59.524453 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1222 00:26:59.524460 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1222 00:26:59.524466 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1222 00:26:59.524472 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1222 00:26:59.524479 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1222 00:26:59.524489 1915516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1222 00:26:59.524496 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1222 00:26:59.524502 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1222 00:26:59.524508 1915516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1222 00:26:59.524514 1915516 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1222 00:26:59.524521 1915516 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1222 00:26:59.524528 1915516 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1222 00:26:59.524534 1915516 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1222 00:26:59.524540 1915516 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1222 00:26:59.524546 1915516 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1222 00:26:59.524554 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1222 00:26:59.524560 1915516 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1222 00:26:59.524564 1915516 command_runner.go:130] > #   deprecated option "conmon".
	I1222 00:26:59.524571 1915516 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1222 00:26:59.524576 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1222 00:26:59.524583 1915516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1222 00:26:59.524587 1915516 command_runner.go:130] > #   should be moved to the container's cgroup.
	I1222 00:26:59.524594 1915516 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1222 00:26:59.524599 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1222 00:26:59.524606 1915516 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1222 00:26:59.524610 1915516 command_runner.go:130] > #   conmon-rs by using:
	I1222 00:26:59.524617 1915516 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1222 00:26:59.524625 1915516 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1222 00:26:59.524633 1915516 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1222 00:26:59.524640 1915516 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1222 00:26:59.524645 1915516 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1222 00:26:59.524652 1915516 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1222 00:26:59.524660 1915516 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1222 00:26:59.524665 1915516 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1222 00:26:59.524673 1915516 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1222 00:26:59.524683 1915516 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1222 00:26:59.524688 1915516 command_runner.go:130] > #   when a machine crash happens.
	I1222 00:26:59.524695 1915516 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1222 00:26:59.524703 1915516 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1222 00:26:59.524711 1915516 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1222 00:26:59.524715 1915516 command_runner.go:130] > #   seccomp profile for the runtime.
	I1222 00:26:59.524721 1915516 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1222 00:26:59.524729 1915516 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1222 00:26:59.524732 1915516 command_runner.go:130] > #
	I1222 00:26:59.524737 1915516 command_runner.go:130] > # Using the seccomp notifier feature:
	I1222 00:26:59.524739 1915516 command_runner.go:130] > #
	I1222 00:26:59.524745 1915516 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1222 00:26:59.524752 1915516 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1222 00:26:59.524755 1915516 command_runner.go:130] > #
	I1222 00:26:59.524761 1915516 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1222 00:26:59.524768 1915516 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1222 00:26:59.524770 1915516 command_runner.go:130] > #
	I1222 00:26:59.524776 1915516 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1222 00:26:59.524780 1915516 command_runner.go:130] > # feature.
	I1222 00:26:59.524783 1915516 command_runner.go:130] > #
	I1222 00:26:59.524789 1915516 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1222 00:26:59.524795 1915516 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1222 00:26:59.524801 1915516 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1222 00:26:59.524807 1915516 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1222 00:26:59.524813 1915516 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1222 00:26:59.524815 1915516 command_runner.go:130] > #
	I1222 00:26:59.524834 1915516 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1222 00:26:59.524841 1915516 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1222 00:26:59.524844 1915516 command_runner.go:130] > #
	I1222 00:26:59.524855 1915516 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1222 00:26:59.524861 1915516 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1222 00:26:59.524865 1915516 command_runner.go:130] > #
	I1222 00:26:59.524871 1915516 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1222 00:26:59.524877 1915516 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1222 00:26:59.524880 1915516 command_runner.go:130] > # limitation.
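	For illustration of the notifier workflow described above, here is a minimal Go sketch (assuming client-go's corev1/metav1 types) of a pod that opts in: the annotation key is the one from the comments above, the pod name is hypothetical, and restartPolicy is set to Never as the comments require.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod opting into the seccomp notifier: the runtime
		// handler must list "io.kubernetes.cri-o.seccompNotifierAction" in its
		// allowed_annotations, and restartPolicy must be Never so the kubelet
		// does not immediately restart a terminated container.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "seccomp-notifier-demo", // hypothetical name
				Annotations: map[string]string{
					"io.kubernetes.cri-o.seccompNotifierAction": "stop",
				},
			},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever,
			},
		}
		fmt.Println(pod.Annotations)
	}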
	I1222 00:26:59.524884 1915516 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1222 00:26:59.524890 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1222 00:26:59.524894 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524900 1915516 command_runner.go:130] > runtime_root = "/run/crun"
	I1222 00:26:59.524904 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524908 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524912 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524917 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524921 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524924 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524931 1915516 command_runner.go:130] > allowed_annotations = [
	I1222 00:26:59.524936 1915516 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1222 00:26:59.524939 1915516 command_runner.go:130] > ]
	I1222 00:26:59.524944 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524948 1915516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1222 00:26:59.524952 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1222 00:26:59.524955 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524959 1915516 command_runner.go:130] > runtime_root = "/run/runc"
	I1222 00:26:59.524964 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524967 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524972 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524976 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524980 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524984 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524988 1915516 command_runner.go:130] > privileged_without_host_devices = false
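	As a sketch only, the runtime-handler tables above (crun/runc) can be decoded into a Go struct with a TOML library; this assumes github.com/BurntSushi/toml, and the struct mirrors the options shown rather than CRI-O's actual internal types.

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// RuntimeHandler mirrors the per-runtime options shown above; an
	// illustrative struct, not CRI-O's internal configuration type.
	type RuntimeHandler struct {
		RuntimePath        string   `toml:"runtime_path"`
		RuntimeType        string   `toml:"runtime_type"`
		RuntimeRoot        string   `toml:"runtime_root"`
		MonitorPath        string   `toml:"monitor_path"`
		MonitorCgroup      string   `toml:"monitor_cgroup"`
		AllowedAnnotations []string `toml:"allowed_annotations"`
	}

	type config struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]RuntimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		data := `
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/libexec/crio/crun"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = ["io.containers.trace-syscall"]
	`
		var c config
		if _, err := toml.Decode(data, &c); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", c.Crio.Runtime.Runtimes["crun"])
	}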
	I1222 00:26:59.524995 1915516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1222 00:26:59.525000 1915516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1222 00:26:59.525007 1915516 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1222 00:26:59.525015 1915516 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1222 00:26:59.525027 1915516 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1222 00:26:59.525037 1915516 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; it is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1222 00:26:59.525044 1915516 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1222 00:26:59.525050 1915516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1222 00:26:59.525059 1915516 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1222 00:26:59.525068 1915516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1222 00:26:59.525074 1915516 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1222 00:26:59.525081 1915516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1222 00:26:59.525085 1915516 command_runner.go:130] > # Example:
	I1222 00:26:59.525089 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1222 00:26:59.525095 1915516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1222 00:26:59.525100 1915516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1222 00:26:59.525108 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1222 00:26:59.525111 1915516 command_runner.go:130] > # cpuset = "0-1"
	I1222 00:26:59.525115 1915516 command_runner.go:130] > # cpushares = "5"
	I1222 00:26:59.525119 1915516 command_runner.go:130] > # cpuquota = "1000"
	I1222 00:26:59.525122 1915516 command_runner.go:130] > # cpuperiod = "100000"
	I1222 00:26:59.525126 1915516 command_runner.go:130] > # cpulimit = "35"
	I1222 00:26:59.525129 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.525133 1915516 command_runner.go:130] > # The workload name is workload-type.
	I1222 00:26:59.525141 1915516 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1222 00:26:59.525147 1915516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1222 00:26:59.525153 1915516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1222 00:26:59.525161 1915516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1222 00:26:59.525166 1915516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
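	The quota derivation described above (millicore limit plus a CFS period in microseconds) works out as a one-liner; a minimal Go sketch of the stated relationship, using the example values from this config:

	package main

	import "fmt"

	// quotaFromLimit derives a CPU quota (µs) from a millicore limit and a CFS
	// period (µs), per the comments above: 1000 millicores equals one full CPU
	// per period. Illustrative only, not CRI-O's implementation.
	func quotaFromLimit(cpulimitMillis, cpuperiodMicros int64) int64 {
		return cpulimitMillis * cpuperiodMicros / 1000
	}

	func main() {
		// With the example above, cpulimit=35 and cpuperiod=100000 yield a
		// quota of 3500µs per 100000µs period (3.5% of one CPU).
		fmt.Println(quotaFromLimit(35, 100000))
	}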
	I1222 00:26:59.525171 1915516 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1222 00:26:59.525178 1915516 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1222 00:26:59.525182 1915516 command_runner.go:130] > # Default value is set to true
	I1222 00:26:59.525187 1915516 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1222 00:26:59.525192 1915516 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1222 00:26:59.525197 1915516 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1222 00:26:59.525201 1915516 command_runner.go:130] > # Default value is set to 'false'
	I1222 00:26:59.525205 1915516 command_runner.go:130] > # disable_hostport_mapping = false
	I1222 00:26:59.525211 1915516 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1222 00:26:59.525219 1915516 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1222 00:26:59.525222 1915516 command_runner.go:130] > # timezone = ""
	I1222 00:26:59.525229 1915516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1222 00:26:59.525231 1915516 command_runner.go:130] > #
	I1222 00:26:59.525238 1915516 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1222 00:26:59.525244 1915516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1222 00:26:59.525248 1915516 command_runner.go:130] > [crio.image]
	I1222 00:26:59.525254 1915516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1222 00:26:59.525258 1915516 command_runner.go:130] > # default_transport = "docker://"
	I1222 00:26:59.525363 1915516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1222 00:26:59.525377 1915516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525381 1915516 command_runner.go:130] > # global_auth_file = ""
	I1222 00:26:59.525392 1915516 command_runner.go:130] > # The image used to instantiate infra containers.
	I1222 00:26:59.525398 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525403 1915516 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.525414 1915516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1222 00:26:59.525425 1915516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525432 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525436 1915516 command_runner.go:130] > # pause_image_auth_file = ""
	I1222 00:26:59.525442 1915516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1222 00:26:59.525450 1915516 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1222 00:26:59.525460 1915516 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1222 00:26:59.525466 1915516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1222 00:26:59.525474 1915516 command_runner.go:130] > # pause_command = "/pause"
	I1222 00:26:59.525480 1915516 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1222 00:26:59.525491 1915516 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1222 00:26:59.525499 1915516 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1222 00:26:59.525509 1915516 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1222 00:26:59.525515 1915516 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1222 00:26:59.525521 1915516 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1222 00:26:59.525527 1915516 command_runner.go:130] > # pinned_images = [
	I1222 00:26:59.525530 1915516 command_runner.go:130] > # ]
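	The exact/glob/keyword semantics spelled out above (exact must match the whole name, "prefix*" matches by prefix, "*keyword*" by substring) can be sketched in a few lines of Go; this is a reading of the comment, not CRI-O's matcher.

	package main

	import (
		"fmt"
		"strings"
	)

	// matchesPinned reports whether an image name matches one pinned_images
	// pattern, following the comment above. A sketch of the described
	// semantics only.
	func matchesPinned(pattern, image string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			return strings.Contains(image, strings.Trim(pattern, "*"))
		case strings.HasSuffix(pattern, "*"):
			return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
		default:
			return pattern == image
		}
	}

	func main() {
		fmt.Println(matchesPinned("registry.k8s.io/pause*", "registry.k8s.io/pause:3.10.1")) // true
		fmt.Println(matchesPinned("*pause*", "registry.k8s.io/pause:3.10.1"))                // true
	}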
	I1222 00:26:59.525536 1915516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1222 00:26:59.525545 1915516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1222 00:26:59.525557 1915516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1222 00:26:59.525563 1915516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1222 00:26:59.525572 1915516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1222 00:26:59.525578 1915516 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1222 00:26:59.525584 1915516 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1222 00:26:59.525594 1915516 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1222 00:26:59.525601 1915516 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1222 00:26:59.525608 1915516 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1222 00:26:59.525616 1915516 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1222 00:26:59.525621 1915516 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
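	The per-namespace lookup with fallback described above (<SIGNATURE_POLICY_DIR>/<NAMESPACE>.json, else the global policy) is simple to sketch; paths below are the ones from this config, while the helper itself is hypothetical.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// resolvePolicy picks the per-namespace policy <dir>/<namespace>.json when
	// it exists, else falls back to the global policy, mirroring the lookup
	// described above.
	func resolvePolicy(dir, namespace, fallback string) string {
		if namespace != "" {
			p := filepath.Join(dir, namespace+".json")
			if _, err := os.Stat(p); err == nil {
				return p
			}
		}
		return fallback
	}

	func main() {
		fmt.Println(resolvePolicy("/etc/crio/policies", "kube-system", "/etc/crio/policy.json"))
	}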
	I1222 00:26:59.525627 1915516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1222 00:26:59.525634 1915516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1222 00:26:59.525642 1915516 command_runner.go:130] > # changing them here.
	I1222 00:26:59.525648 1915516 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1222 00:26:59.525652 1915516 command_runner.go:130] > # insecure_registries = [
	I1222 00:26:59.525659 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525666 1915516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1222 00:26:59.525676 1915516 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1222 00:26:59.525684 1915516 command_runner.go:130] > # image_volumes = "mkdir"
	I1222 00:26:59.525692 1915516 command_runner.go:130] > # Temporary directory to use for storing big files
	I1222 00:26:59.525697 1915516 command_runner.go:130] > # big_files_temporary_dir = ""
	I1222 00:26:59.525703 1915516 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1222 00:26:59.525711 1915516 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1222 00:26:59.525717 1915516 command_runner.go:130] > # auto_reload_registries = false
	I1222 00:26:59.525724 1915516 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1222 00:26:59.525734 1915516 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1222 00:26:59.525744 1915516 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1222 00:26:59.525748 1915516 command_runner.go:130] > # pull_progress_timeout = "0s"
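	A minimal Go sketch of the interval rule just stated (interval = timeout / 10, with 0 disabling both the timeout and the progress output); illustrative, not CRI-O's code.

	package main

	import (
		"fmt"
		"time"
	)

	// progressInterval derives the pull progress interval as
	// pull_progress_timeout / 10, per the comment above; a zero timeout
	// disables both the timeout and the progress output.
	func progressInterval(timeout time.Duration) time.Duration {
		if timeout == 0 {
			return 0 // disabled
		}
		return timeout / 10
	}

	func main() {
		fmt.Println(progressInterval(90 * time.Second)) // 9s
	}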
	I1222 00:26:59.525770 1915516 command_runner.go:130] > # The mode of short name resolution.
	I1222 00:26:59.525780 1915516 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1222 00:26:59.525788 1915516 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1222 00:26:59.525794 1915516 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1222 00:26:59.525800 1915516 command_runner.go:130] > # short_name_mode = "enforcing"
	I1222 00:26:59.525807 1915516 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1222 00:26:59.525813 1915516 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1222 00:26:59.525821 1915516 command_runner.go:130] > # oci_artifact_mount_support = true
	I1222 00:26:59.525829 1915516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1222 00:26:59.525836 1915516 command_runner.go:130] > # CNI plugins.
	I1222 00:26:59.525843 1915516 command_runner.go:130] > [crio.network]
	I1222 00:26:59.525850 1915516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1222 00:26:59.525859 1915516 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1222 00:26:59.525863 1915516 command_runner.go:130] > # cni_default_network = ""
	I1222 00:26:59.525869 1915516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1222 00:26:59.525877 1915516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1222 00:26:59.525883 1915516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1222 00:26:59.525887 1915516 command_runner.go:130] > # plugin_dirs = [
	I1222 00:26:59.525891 1915516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1222 00:26:59.525897 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525903 1915516 command_runner.go:130] > # List of included pod metrics.
	I1222 00:26:59.525908 1915516 command_runner.go:130] > # included_pod_metrics = [
	I1222 00:26:59.525913 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525920 1915516 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1222 00:26:59.525926 1915516 command_runner.go:130] > [crio.metrics]
	I1222 00:26:59.525932 1915516 command_runner.go:130] > # Globally enable or disable metrics support.
	I1222 00:26:59.525936 1915516 command_runner.go:130] > # enable_metrics = false
	I1222 00:26:59.525945 1915516 command_runner.go:130] > # Specify enabled metrics collectors.
	I1222 00:26:59.525950 1915516 command_runner.go:130] > # Per default all metrics are enabled.
	I1222 00:26:59.525957 1915516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1222 00:26:59.525969 1915516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1222 00:26:59.525976 1915516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1222 00:26:59.525981 1915516 command_runner.go:130] > # metrics_collectors = [
	I1222 00:26:59.525987 1915516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1222 00:26:59.525992 1915516 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1222 00:26:59.525998 1915516 command_runner.go:130] > # 	"containers_oom_total",
	I1222 00:26:59.526006 1915516 command_runner.go:130] > # 	"processes_defunct",
	I1222 00:26:59.526010 1915516 command_runner.go:130] > # 	"operations_total",
	I1222 00:26:59.526014 1915516 command_runner.go:130] > # 	"operations_latency_seconds",
	I1222 00:26:59.526019 1915516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1222 00:26:59.526023 1915516 command_runner.go:130] > # 	"operations_errors_total",
	I1222 00:26:59.526028 1915516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1222 00:26:59.526033 1915516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1222 00:26:59.526037 1915516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1222 00:26:59.526041 1915516 command_runner.go:130] > # 	"image_pulls_success_total",
	I1222 00:26:59.526045 1915516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1222 00:26:59.526050 1915516 command_runner.go:130] > # 	"containers_oom_count_total",
	I1222 00:26:59.526055 1915516 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1222 00:26:59.526059 1915516 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1222 00:26:59.526064 1915516 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1222 00:26:59.526066 1915516 command_runner.go:130] > # ]
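	The prefix equivalence noted above ("operations" == "crio_operations" == "container_runtime_crio_operations") amounts to stripping two prefixes; a Go sketch of the stated rule, not CRI-O's code:

	package main

	import (
		"fmt"
		"strings"
	)

	// canonicalCollector strips the "container_runtime_" and "crio_" prefixes
	// so all three spellings above name the same collector.
	func canonicalCollector(name string) string {
		name = strings.TrimPrefix(name, "container_runtime_")
		return strings.TrimPrefix(name, "crio_")
	}

	func main() {
		for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
			fmt.Println(canonicalCollector(n))
		}
	}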
	I1222 00:26:59.526080 1915516 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1222 00:26:59.526084 1915516 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1222 00:26:59.526089 1915516 command_runner.go:130] > # The port on which the metrics server will listen.
	I1222 00:26:59.526093 1915516 command_runner.go:130] > # metrics_port = 9090
	I1222 00:26:59.526098 1915516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1222 00:26:59.526102 1915516 command_runner.go:130] > # metrics_socket = ""
	I1222 00:26:59.526107 1915516 command_runner.go:130] > # The certificate for the secure metrics server.
	I1222 00:26:59.526118 1915516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1222 00:26:59.526125 1915516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1222 00:26:59.526130 1915516 command_runner.go:130] > # certificate on any modification event.
	I1222 00:26:59.526134 1915516 command_runner.go:130] > # metrics_cert = ""
	I1222 00:26:59.526139 1915516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1222 00:26:59.526148 1915516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1222 00:26:59.526151 1915516 command_runner.go:130] > # metrics_key = ""
	I1222 00:26:59.526157 1915516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1222 00:26:59.526163 1915516 command_runner.go:130] > [crio.tracing]
	I1222 00:26:59.526169 1915516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1222 00:26:59.526177 1915516 command_runner.go:130] > # enable_tracing = false
	I1222 00:26:59.526182 1915516 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1222 00:26:59.526187 1915516 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1222 00:26:59.526197 1915516 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1222 00:26:59.526205 1915516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1222 00:26:59.526213 1915516 command_runner.go:130] > # CRI-O NRI configuration.
	I1222 00:26:59.526218 1915516 command_runner.go:130] > [crio.nri]
	I1222 00:26:59.526222 1915516 command_runner.go:130] > # Globally enable or disable NRI.
	I1222 00:26:59.526226 1915516 command_runner.go:130] > # enable_nri = true
	I1222 00:26:59.526230 1915516 command_runner.go:130] > # NRI socket to listen on.
	I1222 00:26:59.526238 1915516 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1222 00:26:59.526242 1915516 command_runner.go:130] > # NRI plugin directory to use.
	I1222 00:26:59.526247 1915516 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1222 00:26:59.526255 1915516 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1222 00:26:59.526264 1915516 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1222 00:26:59.526270 1915516 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1222 00:26:59.526331 1915516 command_runner.go:130] > # nri_disable_connections = false
	I1222 00:26:59.526342 1915516 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1222 00:26:59.526347 1915516 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1222 00:26:59.526352 1915516 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1222 00:26:59.526359 1915516 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1222 00:26:59.526364 1915516 command_runner.go:130] > # NRI default validator configuration.
	I1222 00:26:59.526374 1915516 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1222 00:26:59.526381 1915516 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently, the following adjustments
	I1222 00:26:59.526389 1915516 command_runner.go:130] > # can be restricted/rejected:
	I1222 00:26:59.526393 1915516 command_runner.go:130] > # - OCI hook injection
	I1222 00:26:59.526399 1915516 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1222 00:26:59.526406 1915516 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1222 00:26:59.526411 1915516 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1222 00:26:59.526416 1915516 command_runner.go:130] > # - adjustment of linux namespaces
	I1222 00:26:59.526425 1915516 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1222 00:26:59.526435 1915516 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1222 00:26:59.526442 1915516 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1222 00:26:59.526448 1915516 command_runner.go:130] > #
	I1222 00:26:59.526452 1915516 command_runner.go:130] > # [crio.nri.default_validator]
	I1222 00:26:59.526457 1915516 command_runner.go:130] > # nri_enable_default_validator = false
	I1222 00:26:59.526467 1915516 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1222 00:26:59.526474 1915516 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1222 00:26:59.526485 1915516 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1222 00:26:59.526492 1915516 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1222 00:26:59.526500 1915516 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1222 00:26:59.526505 1915516 command_runner.go:130] > # nri_validator_required_plugins = [
	I1222 00:26:59.526510 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526516 1915516 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
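	The required-plugins rule described above (reject creation if a required NRI plugin did not process the request, unless the pod tolerates missing plugins) can be sketched as follows; inputs and the plugin name are hypothetical.

	package main

	import "fmt"

	// rejectForMissingPlugins reports whether container creation should be
	// rejected because a required NRI plugin did not process it, unless the
	// pod carries the tolerate-missing-plugins annotation. A sketch of the
	// rule described above, not CRI-O's validator.
	func rejectForMissingPlugins(required []string, processed map[string]bool, tolerated bool) (string, bool) {
		if tolerated {
			return "", false
		}
		for _, p := range required {
			if !processed[p] {
				return p, true
			}
		}
		return "", false
	}

	func main() {
		missing, reject := rejectForMissingPlugins(
			[]string{"device-injector"}, // hypothetical plugin name
			map[string]bool{},
			false,
		)
		fmt.Println(missing, reject)
	}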
	I1222 00:26:59.526525 1915516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1222 00:26:59.526530 1915516 command_runner.go:130] > [crio.stats]
	I1222 00:26:59.526543 1915516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1222 00:26:59.526549 1915516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1222 00:26:59.526556 1915516 command_runner.go:130] > # stats_collection_period = 0
	I1222 00:26:59.526563 1915516 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1222 00:26:59.526573 1915516 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1222 00:26:59.526577 1915516 command_runner.go:130] > # collection_period = 0
	I1222 00:26:59.526772 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.4923315Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1222 00:26:59.526793 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492368561Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1222 00:26:59.526804 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492394456Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1222 00:26:59.526811 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492418381Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1222 00:26:59.526829 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.49249673Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:59.526841 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492972301Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1222 00:26:59.526852 1915516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1222 00:26:59.526929 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:59.526945 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:59.526969 1915516 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:26:59.526995 1915516 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:26:59.527121 1915516 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
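	The YAML above is rendered from the kubeadm options struct logged before it. As a rough illustration of that step, here is a minimal Go text/template sketch substituting the node name, IP, and port into an InitConfiguration fragment; the struct, field names, and template are illustrative, not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// initOpts carries only the fields this sketch substitutes; hypothetical,
	// not minikube's kubeadm options type.
	type initOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(tmpl))
		_ = t.Execute(os.Stdout, initOpts{
			AdvertiseAddress: "192.168.49.2",
			APIServerPort:    8441,
			NodeName:         "functional-837733",
		})
	}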
	
	I1222 00:26:59.527200 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:26:59.535433 1915516 command_runner.go:130] > kubeadm
	I1222 00:26:59.535512 1915516 command_runner.go:130] > kubectl
	I1222 00:26:59.535531 1915516 command_runner.go:130] > kubelet
	I1222 00:26:59.535576 1915516 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:26:59.535650 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:26:59.543039 1915516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:26:59.555537 1915516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:26:59.568459 1915516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 00:26:59.580757 1915516 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:26:59.584332 1915516 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:26:59.584456 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.724165 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:00.346235 1915516 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:27:00.346307 1915516 certs.go:195] generating shared ca certs ...
	I1222 00:27:00.346339 1915516 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:00.346533 1915516 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:27:00.346648 1915516 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:27:00.346710 1915516 certs.go:257] generating profile certs ...
	I1222 00:27:00.346876 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:27:00.347136 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:27:00.347262 1915516 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:27:00.347310 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:27:00.347347 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:27:00.347391 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:27:00.347455 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:27:00.347514 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:27:00.347563 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:27:00.347599 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:27:00.347655 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:27:00.347758 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:27:00.347824 1915516 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:27:00.347874 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:27:00.347933 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:27:00.347992 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:27:00.348063 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:27:00.348147 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:27:00.348210 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem -> /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.348254 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.348294 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.349116 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:27:00.373997 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:27:00.395587 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:27:00.415350 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:27:00.433576 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:27:00.451786 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:27:00.469999 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:27:00.488347 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:27:00.506732 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:27:00.524546 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:27:00.542404 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:27:00.559977 1915516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:27:00.572924 1915516 ssh_runner.go:195] Run: openssl version
	I1222 00:27:00.579133 1915516 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:27:00.579568 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.587905 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:27:00.596398 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600098 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600361 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600432 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.641247 1915516 command_runner.go:130] > 51391683
	I1222 00:27:00.641737 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:27:00.649412 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.657021 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:27:00.664905 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668819 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668897 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668950 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.710718 1915516 command_runner.go:130] > 3ec20f2e
	I1222 00:27:00.711212 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:27:00.718755 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.725802 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:27:00.733002 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736446 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736472 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736534 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.776933 1915516 command_runner.go:130] > b5213941
	I1222 00:27:00.777458 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
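	The hash values printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject hashes: each CA certificate gets symlinked as <hash>.0 under /etc/ssl/certs so the system trust lookup can resolve it by subject. A minimal Go sketch of that install pattern follows; it is illustrative only (not minikube's actual code), it shells out to openssl exactly as the log does, and the path in main is a placeholder.

    // Sketch: compute the OpenSSL subject hash of a PEM certificate and
    // symlink it as <hash>.0 under /etc/ssl/certs, mirroring the
    // `openssl x509 -hash -noout` + `ln -fs` sequence in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(pemPath string) error {
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of `ln -fs`: drop any stale link, then create the symlink.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Illustrative path taken from the log; any PEM CA file works.
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }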
	I1222 00:27:00.784943 1915516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788657 1915516 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788682 1915516 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:27:00.788688 1915516 command_runner.go:130] > Device: 259,1	Inode: 1585688     Links: 1
	I1222 00:27:00.788695 1915516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:27:00.788729 1915516 command_runner.go:130] > Access: 2025-12-22 00:22:53.052243967 +0000
	I1222 00:27:00.788738 1915516 command_runner.go:130] > Modify: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788744 1915516 command_runner.go:130] > Change: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788752 1915516 command_runner.go:130] >  Birth: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788843 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:27:00.834300 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.834770 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:27:00.875032 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.875494 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:27:00.916072 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.916544 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:27:00.957168 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.957608 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:27:01.005250 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:01.005855 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:27:01.046783 1915516 command_runner.go:130] > Certificate will not expire
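	Each `openssl x509 -checkend 86400` run above exits zero and prints "Certificate will not expire" because the certificate's NotAfter is more than 24 hours away. The same check can be expressed natively; a minimal sketch, assuming a PEM-encoded input file (illustrative path, not minikube's implementation):

    // Sketch: report whether a certificate expires within the given window,
    // matching the semantics of `openssl x509 -checkend <seconds>`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func willExpireWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Expiring within the window iff now+window falls past NotAfter.
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if expiring {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }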
	I1222 00:27:01.047595 1915516 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:27:01.047694 1915516 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:27:01.047794 1915516 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:27:01.074422 1915516 cri.go:96] found id: ""
	I1222 00:27:01.074566 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:27:01.081706 1915516 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:27:01.081729 1915516 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:27:01.081736 1915516 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:27:01.082711 1915516 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:27:01.082752 1915516 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:27:01.082809 1915516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:27:01.090505 1915516 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:27:01.090899 1915516 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091003 1915516 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "functional-837733" cluster setting kubeconfig missing "functional-837733" context setting]
	I1222 00:27:01.091279 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.091694 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091853 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.092420 1915516 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:27:01.092441 1915516 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:27:01.092469 1915516 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:27:01.092508 1915516 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:27:01.092519 1915516 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:27:01.092524 1915516 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:27:01.092821 1915516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:27:01.103318 1915516 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:27:01.103353 1915516 kubeadm.go:602] duration metric: took 20.58853ms to restartPrimaryControlPlane
	I1222 00:27:01.103366 1915516 kubeadm.go:403] duration metric: took 55.780111ms to StartCluster
	I1222 00:27:01.103381 1915516 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.103449 1915516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.104102 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.104313 1915516 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:27:01.104706 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:27:01.104752 1915516 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:27:01.104888 1915516 addons.go:70] Setting storage-provisioner=true in profile "functional-837733"
	I1222 00:27:01.104908 1915516 addons.go:239] Setting addon storage-provisioner=true in "functional-837733"
	I1222 00:27:01.104931 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.105709 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.105852 1915516 addons.go:70] Setting default-storageclass=true in profile "functional-837733"
	I1222 00:27:01.105876 1915516 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-837733"
	I1222 00:27:01.106124 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.111206 1915516 out.go:179] * Verifying Kubernetes components...
	I1222 00:27:01.114015 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:27:01.149988 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.150406 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.150740 1915516 addons.go:239] Setting addon default-storageclass=true in "functional-837733"
	I1222 00:27:01.150781 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.151633 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.160017 1915516 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:27:01.163813 1915516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.163840 1915516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:27:01.163933 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.208933 1915516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:01.208958 1915516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:27:01.209032 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.220299 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.259895 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.324483 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:01.352138 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.392769 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.095697 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.095745 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095787 1915516 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095683 1915516 node_ready.go:35] waiting up to 6m0s for node "functional-837733" to be "Ready" ...
	I1222 00:27:02.095893 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.095958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.096162 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.096189 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.096261 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.229628 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.250171 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.299403 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.303146 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.325073 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.325129 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
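	Every apply above fails the same way: kubectl cannot reach the apiserver on localhost:8441, so addons.go logs "apply failed, will retry" and retry.go schedules another attempt (the first after 100ms). A minimal sketch of that retry-with-backoff shape, assuming a kubectl binary on PATH and a hypothetical deadline parameter; this shows the general pattern, not minikube's retry.go:

    // Sketch: re-run `kubectl apply` with growing delays until it succeeds
    // or the overall deadline passes, as the repeated log entries suggest.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, deadline time.Duration) error {
    	backoff := 100 * time.Millisecond // first retry delay seen in the log
    	start := time.Now()
    	for {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		if out, err := cmd.CombinedOutput(); err == nil {
    			return nil
    		} else if time.Since(start) > deadline {
    			return fmt.Errorf("giving up on %s: %v\n%s", manifest, err, out)
    		}
    		time.Sleep(backoff)
    		backoff *= 2 // exponential backoff between attempts (unbounded in this sketch)
    	}
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }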
	I1222 00:27:02.596516 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.596602 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.596960 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
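	The paired "Request"/"Response" entries above come from round-tripper-level tracing of each API call; when the connection is refused, the response logs with an empty status and milliseconds=0. A minimal sketch of such a logging http.RoundTripper (illustrative, not the client-go implementation):

    // Sketch: wrap an http.RoundTripper to log the verb, URL, Accept header,
    // and latency of every request, including failed ones.
    package main

    import (
    	"log"
    	"net/http"
    	"time"
    )

    type loggingTransport struct{ next http.RoundTripper }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	start := time.Now()
    	log.Printf("Request verb=%s url=%s accept=%q", req.Method, req.URL, req.Header.Get("Accept"))
    	resp, err := t.next.RoundTrip(req)
    	ms := time.Since(start).Milliseconds()
    	if err != nil {
    		// A refused connection has no status, matching status="" in the log.
    		log.Printf("Response status=%q milliseconds=%d err=%v", "", ms, err)
    		return nil, err
    	}
    	log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
    	_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-837733")
    }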
	I1222 00:27:02.700363 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.757664 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.757704 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.778883 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.845712 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.845792 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.096069 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.096495 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.596210 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.596328 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.596586 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.599774 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:03.608220 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:03.704291 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.704413 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.724087 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.724140 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.096645 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.096746 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.097060 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.097110 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
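	node_ready.go polls GET /api/v1/nodes/functional-837733 roughly every 500ms for up to 6m0s, treating connection-refused as transient and retrying until the node reports Ready. A minimal client-go sketch of that wait loop (illustrative; the kubeconfig location and error handling are assumptions):

    // Sketch: poll the named node until its Ready condition is True,
    // swallowing transient API errors while the apiserver restarts.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(name string, timeout time.Duration) error {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		return err
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		} // on error (e.g. connection refused) fall through and retry
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	if err := waitNodeReady("functional-837733", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }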
	I1222 00:27:04.395547 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:04.461482 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.461526 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.596939 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.597016 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.597354 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.908718 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:04.967177 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.967217 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:05.096456 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.096540 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.096895 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.596626 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.596711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.096915 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.096990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:06.097365 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:06.122623 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:06.180609 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.181800 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 00:27:06.185073 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.244148 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:06.244195 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.596862 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.596990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.597266 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.095976 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.096054 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.096380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.596061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.596391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.650649 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:07.682081 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:07.723708 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.723755 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:07.756657 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.756702 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:08.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.096407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.596459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:08.596512 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:09.096039 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.500982 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:09.557414 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:09.557460 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:09.596618 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.596687 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.596949 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.096786 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.096870 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.407596 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:10.466808 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:10.466847 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:10.596231 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.596316 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:10.596666 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:11.096583 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.096658 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.596951 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.597033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.597361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.095930 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.096006 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.096359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.596103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.596346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.096064 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.096469 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:13.110706 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:13.162639 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:13.165994 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:13.596604 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.597025 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.096902 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.097221 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.596331 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.007048 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:15.075847 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:15.075897 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:15.095981 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.096063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.596111 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.596199 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.596504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.096151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.096481 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.596451 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.596873 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.096633 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.096711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.097034 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.256498 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:17.320974 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:17.324801 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:17.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.596544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:17.596599 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.096285 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.096359 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.096666 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.596351 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.596417 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.596709 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.596186 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.596288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.596608 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:19.596658 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:20.095993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.596149 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.096326 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.096409 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.096724 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.596667 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.596736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.597043 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:21.597093 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:22.096478 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.096951 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.596629 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.596721 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.597082 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.096795 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.096876 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.097130 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.596917 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.596995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.597294 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
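
Note: the repeating GET /api/v1/nodes/functional-837733 entries are a roughly 500 ms readiness poll (node_ready.go) that tolerates "connection refused" while the apiserver restarts. A hedged client-go sketch of an equivalent loop; the kubeconfig path and node name are taken from this log, and the loop structure is an assumption, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-837733", metav1.GetOptions{})
            if err != nil {
                // Matches the W-level "will retry" lines while the apiserver is down.
                fmt.Println("error getting node (will retry):", err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
        }
    }
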
	I1222 00:27:23.941866 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:23.998448 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:23.998497 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:23.998521 1915516 retry.go:84] will retry after 7.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
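
Note: minikube's addon applier re-runs failed applies after a lengthening delay (the retry.go "will retry after ..." lines). A hedged Go sketch of that retry-with-backoff pattern; the helper name and doubling policy are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff (hypothetical helper) runs fn until it succeeds or
    // maxAttempts is exhausted, doubling the wait between attempts.
    func retryWithBackoff(fn func() error, initial time.Duration, maxAttempts int) error {
        wait := initial
        var err error
        for i := 1; i <= maxAttempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            if i < maxAttempts {
                fmt.Printf("attempt %d failed, will retry after %s: %v\n", i, wait, err)
                time.Sleep(wait)
                wait *= 2 // grow the delay, as the lengthening intervals in the log suggest
            }
        }
        return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
    }

    func main() {
        // Stand-in for the failing "kubectl apply" above.
        _ = retryWithBackoff(func() error {
            return errors.New("dial tcp [::1]:8441: connect: connection refused")
        }, 7*time.Second, 3)
    }
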
	I1222 00:27:24.096775 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.096858 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.097179 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.596921 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.596997 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.597304 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.096528 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.596241 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.596313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.596669 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.096357 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.096423 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.096702 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:26.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.596861 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.597278 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.596062 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.596394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.096121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.096413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.596136 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.596210 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.596520 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:28.596577 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:29.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.096085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.096330 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.596121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.096205 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.096297 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.096674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.321067 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:30.379433 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:30.379480 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.379505 1915516 retry.go:84] will retry after 9.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.596804 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.596878 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.597200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.597254 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:31.096701 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.096780 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.097109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.231597 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:31.293911 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:31.293951 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:31.596615 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.096660 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:32.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.097042 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.596886 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:32.596967 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.597283 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.597352 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:33.096960 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:33.097032 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.097391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.596091 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:33.596164 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:34.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.596230 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:34.596325 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.596664 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.096343 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:35.096412 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.096685 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:35.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.596025 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:35.596096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.096033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:36.096117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.596412 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:36.596479 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.596727 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.096555 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:37.096628 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.096973 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:37.097038 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.596788 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:37.596867 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.597219 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.095946 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:38.096029 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.096335 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:38.596129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.596466 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.096201 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:39.096281 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.096646 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:39.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.596380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.596430 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:40.096081 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.096154 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.098739 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:40.155643 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:40.159357 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.159395 1915516 retry.go:84] will retry after 15s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.096329 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:41.096400 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.096688 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.596638 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:41.596757 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.597079 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.597130 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:42.096986 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:42.097069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.101091 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1222 00:27:42.596907 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:42.596992 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.597277 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:43.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.096458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:43.596165 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.096105 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:44.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.096430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:44.096472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:44.596127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.096194 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:45.096288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.596011 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:45.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.596379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.096150 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:46.096534 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.458265 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:46.518675 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:46.518729 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.518755 1915516 retry.go:84] will retry after 28.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.596931 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.597020 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.597346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:47.096075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.096364 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.596079 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:47.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.096201 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:48.096276 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.096625 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.096740 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.595982 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:48.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.596414 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:49.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.096420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.596056 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:49.596137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.596454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.096214 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:50.096306 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.596061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:50.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.596464 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.596521 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:51.096372 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:51.096448 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.096776 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.596622 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:51.596716 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.096857 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:52.096927 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.097248 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:52.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:53.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.096378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:53.096448 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.596094 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:53.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.596524 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.096119 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:54.096195 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.096565 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.596116 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:54.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.096066 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:55.096178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:55.096580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.200777 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:55.265479 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:55.265520 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:55.265546 1915516 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
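
Note: across this window the retry delays grow rather than repeat at a fixed interval (storage-provisioner: 7.2s then 28.2s; storageclass: 9.7s, 15s, then 39.2s), consistent with exponential backoff with jitter.
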
	I1222 00:27:55.595965 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:55.596049 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.596359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.095950 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:56.096027 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.096297 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.596381 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:56.596461 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.596751 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.096528 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:57.096601 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.096888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.096932 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.596637 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:57.596710 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.596964 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.096723 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:58.096792 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.097169 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.596957 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:58.597032 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.597316 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.096007 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:59.096099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.096417 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:59.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.596420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.596476 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:00.096143 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:00.096238 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.596214 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:00.596306 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.596559 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.096632 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:01.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.097205 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.596188 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:01.596261 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.596603 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.596670 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.096310 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:02.096394 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.096715 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.596419 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:02.596515 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.596851 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.096613 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:03.096689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.097037 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.596789 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:03.596864 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.597113 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.597153 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.096886 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:04.096958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.097257 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.595963 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:04.596038 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.596345 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:05.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.596020 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:05.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.596425 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.096150 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:06.096257 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.096620 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:06.096682 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.596417 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:06.596491 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.596755 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:07.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.096458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.596180 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:07.596270 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.596617 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:08.096094 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.096386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.596023 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:08.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.596419 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:08.596475 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:09.096045 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:09.096121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.096443 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.596165 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:09.596234 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.596498 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.096188 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:10.096269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.096566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:10.596132 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.596422 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.096409 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:11.096501 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.096849 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:11.096946 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:11.596858 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:11.596941 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.597260 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.097034 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:12.097128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.097525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.595995 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:12.596078 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.596422 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.096026 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:13.096105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.596085 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:13.596158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:13.596572 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:14.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:14.096071 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.096380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.596054 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:14.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.678913 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:14.737274 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:14.740868 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:28:14.740903 1915516 retry.go:84] will retry after 44.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
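The apply fails because kubectl's client-side validation cannot fetch the OpenAPI schema while the apiserver is down, so retry.go schedules another attempt; the 44.3s delay scheduled at 00:28:14.740 does fire at 00:28:59.061 further below. A simplified retry-with-backoff in the same spirit (applyWithRetry, the 10s base delay, and the doubling-plus-jitter policy are assumptions here, not minikube's actual retry.go):

    package main

    import (
    	"fmt"
    	"log"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds or attempts
    // run out, doubling a jittered delay between tries. Simplified
    // stand-in for the behaviour logged above, not minikube's code.
    func applyWithRetry(manifest string, attempts int) error {
    	delay := 10 * time.Second
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
    		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
    		log.Fatal(err)
    	}
    }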
	I1222 00:28:15.096479 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:15.096556 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.096856 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:15.596004 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:15.596073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.596400 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.096039 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:16.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.096510 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:16.096575 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:16.596327 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:16.596405 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.596764 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.096566 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:17.096642 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.096912 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.596695 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:17.596773 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.597104 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.096925 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:18.097003 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.097359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:18.097417 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:18.596043 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:18.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.596361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:19.096109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.096435 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.596136 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:19.596211 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.596552 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.096244 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:20.096313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.096577 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.596023 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:20.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.596453 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:20.596506 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:21.096297 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:21.096367 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:21.096669 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:21.596745 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:21.596825 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:21.597117 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:22.096947 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:22.097021 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:22.097379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:22.596086 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:22.596159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:22.596489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:22.596545 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:23.096015 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:23.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:23.096324 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:23.596056 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:23.596150 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:23.596454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:24.096052 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:24.096138 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:24.096499 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:24.596180 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:24.596263 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:24.596536 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:24.596589 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:25.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:25.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:25.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:25.596179 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:25.596271 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:25.596578 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:26.096014 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:26.096092 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:26.096355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:26.596544 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:26.596874 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:26.596933 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:27.096632 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:27.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:27.097066 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:27.596687 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:27.596753 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:27.597008 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:28.096768 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:28.096839 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:28.097237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:28.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:28.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:28.596433 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:29.095990 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:29.096076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:29.096374 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:29.096425 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:29.596121 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:29.596208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:29.596557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:30.096284 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:30.096366 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:30.096689 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:30.596008 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:30.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:30.596351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:31.096276 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:31.096353 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:31.096679 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:31.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:31.596743 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:31.596816 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:31.597109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:32.096778 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:32.096851 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:32.097100 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:32.596895 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:32.596983 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:32.597299 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:33.096044 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:33.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:33.096444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:33.596138 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:33.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:33.596502 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:33.596552 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:34.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:34.096114 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:34.096436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:34.454097 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:28:34.514142 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514187 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514269 1915516 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
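Enabling default-storageclass gives up for the same underlying reason: nothing can be applied until localhost:8441 answers. A hypothetical pre-flight probe that an addon runner could use to fail fast (probeAPIServer and the /readyz check are illustrative assumptions; the log shows no such probe in minikube):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeAPIServer checks whether the apiserver answers its /readyz
    // endpoint before any manifests are applied. Hypothetical helper.
    func probeAPIServer(base string) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The test cluster uses a self-signed CA; skip verification
    		// for this probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(base + "/readyz")
    	if err != nil {
    		return err // e.g. "dial tcp [::1]:8441: connect: connection refused" as above
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("apiserver not ready: %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := probeAPIServer("https://localhost:8441"); err != nil {
    		fmt.Println("skip addon apply:", err)
    	}
    }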
	I1222 00:28:34.596441 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:34.596529 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:34.596855 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:35.096655 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:35.096737 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:35.097056 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:35.596886 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:35.596959 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:35.597310 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:35.597390 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:36.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:36.096129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:36.096484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:36.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:36.596096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:36.596429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:37.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:37.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:37.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:37.596153 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:37.596232 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:37.596568 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:38.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:38.096447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:38.096490 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:38.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:38.596129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:38.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:39.096163 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:39.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:39.096577 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:39.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:39.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:39.596431 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:40.096080 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:40.096167 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:40.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:40.596326 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:40.596447 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:40.596781 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:41.096558 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:41.096630 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:41.096876 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:41.596824 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:41.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:41.597258 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:42.096037 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:42.096146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:42.097461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1222 00:28:42.097586 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:42.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:42.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:42.596411 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:43.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:43.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:43.096540 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:43.596134 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:43.596208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:43.596546 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:44.096098 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:44.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:44.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:44.596128 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:44.596215 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:44.596558 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:44.596612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:45.096183 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:45.096278 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:45.096726 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:45.596446 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:45.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:45.596802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:46.096553 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:46.096624 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:46.096959 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:46.596832 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:46.596908 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:46.597219 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:46.597275 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:47.095922 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:47.095989 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:47.096242 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:47.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:47.596082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:47.596368 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:48.096094 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:48.096173 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:48.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:48.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:48.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:48.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:49.096045 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:49.096118 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:49.096389 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:49.096433 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:49.596094 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:49.596170 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:49.596486 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:50.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:50.096103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:50.096376 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:50.596025 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:50.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:50.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:51.096359 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:51.096467 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:51.096794 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:51.096869 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:51.596603 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:51.596678 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:51.596972 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:52.096756 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:52.096859 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:52.097229 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:52.595971 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:52.596048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:52.596373 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:53.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:53.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:53.096394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:53.596060 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:53.596131 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:53.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:53.596547 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:54.096081 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:54.096159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:54.096525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:54.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:54.596085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:54.596398 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:55.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:55.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:55.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:55.596109 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:55.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:55.596510 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:56.096086 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:56.096159 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:56.096414 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:56.096453 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:56.596382 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:56.596455 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:56.596777 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:57.096597 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:57.096690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:57.097021 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:57.596765 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:57.596835 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:57.597084 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:58.096904 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:58.096994 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:58.097340 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:58.097391 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:58.596044 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:58.596153 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:58.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.061108 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:59.096537 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.096605 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.096887 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.132532 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138577 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138673 1915516 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:28:59.141746 1915516 out.go:179] * Enabled addons: 
	I1222 00:28:59.145405 1915516 addons.go:530] duration metric: took 1m58.040642899s for enable addons: enabled=[]
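	(The storage-provisioner failure above is a side effect of the same outage: kubectl's client-side validation first fetches /openapi/v2 from the apiserver, so with port 8441 refusing connections the apply fails before anything is submitted. The --validate=false flag suggested in the error would only skip that schema download; the apply itself would still need a reachable apiserver. A hedged sketch of the "apply failed, will retry" behavior, re-running the same command with backoff, follows; the kubectl path, manifest path, and kubeconfig are taken from the log, while the attempt count and linear backoff are assumptions.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f <manifest>` until it
// succeeds, the way the addon callback above retries after a failure.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed (attempt %d): %v\nstdout/stderr:\n%s", i+1, err, out)
		time.Sleep(time.Duration(i+1) * 2 * time.Second) // assumed linear backoff
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}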
	I1222 00:28:59.596293 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.596374 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.596770 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:00.096528 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:00.096670 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:00.097201 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:00.595928 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:00.596014 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:00.596325 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:00.596370 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:01.096378 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:01.096452 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:01.096753 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:01.596603 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:01.596676 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:01.596994 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:02.096700 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:02.096774 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:02.097108 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:02.596912 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:02.596985 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:02.597314 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:02.597388 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:03.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:03.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:03.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:03.596014 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:03.596079 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:03.596346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:04.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:04.096091 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:04.096420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:04.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:04.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:04.596416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:05.095995 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:05.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:05.096409 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:05.096466 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:05.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:05.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:05.596484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:06.096196 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:06.096324 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:06.096662 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:06.596476 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:06.596558 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:06.596812 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:07.096580 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:07.096655 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:07.096995 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:07.097052 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:07.596669 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:07.596745 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:07.597087 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:08.096888 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:08.096974 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:08.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:08.595949 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:08.596080 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:08.596411 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:09.095985 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:09.096064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:09.096372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:09.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:09.596027 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:09.596300 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:09.596339 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:10.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:10.096125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:10.096502 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:10.596205 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:10.596303 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:10.596585 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:11.096568 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:11.096641 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:11.096925 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:11.596903 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:11.596980 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:11.597315 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:11.597389 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:12.096084 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:12.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:12.096485 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:12.595972 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:12.596042 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:12.596348 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:13.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:13.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:13.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:13.596162 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:13.596244 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:13.596528 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:14.096197 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:14.096273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:14.096544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:14.096594 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:14.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:14.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:14.596443 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:15.096144 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:15.096223 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:15.096585 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:15.596256 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:15.596324 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:15.596577 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:16.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:16.096114 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:16.096452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:16.596455 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:16.596531 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:16.596865 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:16.596917 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:17.096480 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:17.096729 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:17.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:17.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:17.596427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:18.096144 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:18.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:18.096579 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:18.596236 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:18.596315 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:18.596584 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:19.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:19.096100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:19.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:19.096504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
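	(Every failed poll reports dial tcp 192.168.49.2:8441: connect: connection refused, which indicates the apiserver port is closed, i.e. the process is down or not yet listening, rather than the host being unreachable; a dead host would surface as a dial timeout or "no route to host" instead. The standalone probe below illustrates the distinction; it is illustrative only and not part of the test suite.)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" comes back almost instantly when the port is
	// closed; an unreachable host would hit the 2s timeout instead.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}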
	I1222 00:29:19.596189 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:19.596268 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:19.596595 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:20.096286 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:20.096362 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:20.096641 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:20.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:20.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:20.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:21.096400 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:21.096480 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:21.096769 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:21.096815 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:21.596733 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:21.596825 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:21.597125 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:22.096830 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:22.096913 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:22.097244 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:22.596062 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:22.596138 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:22.596522 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:23.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:23.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:23.096379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:23.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:23.596102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:23.596429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:23.596485 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:24.096004 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:24.096081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:24.096427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:24.596151 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:24.596232 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:24.596506 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:25.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:25.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:25.096466 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:25.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:25.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:25.596494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:25.596557 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:26.096037 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:26.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:26.096391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:26.596314 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:26.596400 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:26.596729 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:27.096554 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:27.096633 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:27.097006 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:27.596775 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:27.596844 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:27.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:27.597137 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:28.096938 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:28.097014 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:28.097401 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:28.595911 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:28.595983 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:28.596321 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:29.095997 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:29.096068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:29.096361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:29.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:29.596111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:29.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:30.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:30.096126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:30.096489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:30.096547 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:30.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:30.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:30.596327 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:31.096344 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:31.096420 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:31.096776 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:31.596746 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:31.596817 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:31.597107 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:32.096894 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:32.096967 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:32.097237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:32.097281 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:32.596869 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:32.596969 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:32.597275 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:33.096017 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:33.096111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:33.096482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:33.595994 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:33.596068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:33.596315 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:34.096025 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:34.096102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:34.096423 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:34.596130 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:34.596210 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:34.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:34.596568 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:35.096117 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:35.096195 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:35.096478 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:35.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:35.596074 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:35.596406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:36.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:36.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:36.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:36.596449 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:36.596524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:36.596791 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:36.596841 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:37.096628 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:37.096714 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:37.097074 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:37.596870 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:37.596949 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:37.597264 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:38.096987 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:38.097079 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:38.097412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:38.596121 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:38.596255 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:38.596606 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:39.096335 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:39.096415 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:39.096735 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:39.096791 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:39.596011 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:39.596081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:39.596386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:40.096071 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:40.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:40.096512 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:40.596055 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:40.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:40.596469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:41.096427 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:41.096493 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:41.096765 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:41.596776 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:41.596860 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:41.597207 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:41.597262 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:42.097196 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:42.097290 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:42.097753 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:42.596494 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:42.596559 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:42.596818 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:43.096412 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:43.096485 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:43.096813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:43.596584 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:43.596660 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:43.596988 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:44.096629 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:44.096696 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:44.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:44.097007 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:44.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:44.596896 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:44.597195 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:45.097028 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:45.097117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:45.097583 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:45.596273 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:45.596360 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:45.596616 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.096043 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.096122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.596472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:46.596528 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:47.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.096065 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.096384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:47.596069 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.596146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.096477 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.596372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:49.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:49.096506 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:49.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.596112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.596446 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.096191 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.596046 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.596450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:51.096262 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.096340 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.096671 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:51.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:51.596674 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.596745 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.597007 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.096802 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.096883 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.097238 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.595995 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.596076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.096179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.596097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.596419 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:53.596472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:54.096190 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.096273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.096617 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:54.595974 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.596313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.096097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.596182 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.596258 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.596563 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:55.596609 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:56.096131 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.096478 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:56.596450 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.596522 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.596813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:58.096113 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.096201 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.096499 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:58.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:58.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.096348 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:00.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.096200 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.096548 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:00.096612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:00.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.096631 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.096833 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.596402 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.596486 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.596923 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:02.096741 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.096817 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.097099 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:02.097145 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:02.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.596554 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.596891 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.096678 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.096749 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.097049 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.596893 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.597160 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:04.096926 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.097001 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.097385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:04.097441 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:04.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.596406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.596102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.596436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.096136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.596009 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.596350 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:06.596396 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:07.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:07.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.596115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.096165 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.096526 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:08.596499 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:09.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.096074 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.096386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:09.596090 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.596516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.096073 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.096161 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:11.096517 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.096604 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:11.097278 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:11.596969 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.597041 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.597372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.096147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.596282 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.596358 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.096032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.096105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.096445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.596024 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.596105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.596479 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:13.596523 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:14.096010 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.096415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:14.596099 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.096501 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.596363 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:16.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:16.096639 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:16.596380 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.596453 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.596745 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.096475 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.096802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.596630 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.596731 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.597138 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:18.096998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.097081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.097420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:18.097478 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:18.596096 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.096545 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.096076 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:20.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:21.096445 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.096524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.096840 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:21.596761 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.596832 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.597105 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.096883 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.096964 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.097271 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.596982 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.597056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.597440 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:22.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:23.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:23.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.596157 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.596226 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:25.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:25.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:25.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.596100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.596424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.096120 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.096534 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.596546 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.596888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:27.096591 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.096677 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.097053 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:27.097108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:27.596848 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.596955 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.096089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.096457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.596475 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.096142 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.096209 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:29.596527 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:30.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.096205 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:30.596200 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.596525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.096447 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.096541 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.596722 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.596793 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.597128 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:31.597181 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:32.096921 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.096995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:32.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.596073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.596107 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.596177 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.596451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:34.096133 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.096208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.096544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:34.096604 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:34.596306 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.596398 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.596763 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.096549 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.096621 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.096956 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.596693 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.596770 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.597109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:36.096908 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.096988 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.097351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:36.097407 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:36.596345 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.596419 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.596680 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.096025 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.096427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.096197 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.596567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:38.596620 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:39.096295 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.096370 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.596068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.596381 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.096092 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.596269 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.596348 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.596667 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:40.596717 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:41.096474 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.096825 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:41.596734 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.596808 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.597164 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.096930 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.097022 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.097406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.596367 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:43.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.096094 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:43.096482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 request/empty-response pair repeats at ~500 ms intervals from 00:30:43.596 through 00:31:44.096, with node_ready.go:55 logging the same will-retry warning ("dial tcp 192.168.49.2:8441: connect: connection refused") roughly every two seconds throughout ...]
	I1222 00:31:44.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:44.596131 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:44.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:45.096206 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:45.096300 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:45.096704 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:45.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:45.596071 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:45.596317 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:46.096028 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:46.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:46.096449 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:46.596153 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:46.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:46.596566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:46.596617 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:47.095989 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:47.096057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:47.096301 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:47.595982 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:47.596058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:47.596394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:48.096107 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:48.096200 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:48.096526 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:48.595989 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:48.596058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:48.596384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:49.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:49.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:49.096455 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:49.096507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:49.596031 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:49.596112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:49.596450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:50.096150 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:50.096219 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:50.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:50.596217 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:50.596292 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:50.596622 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:51.096688 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:51.096769 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:51.097192 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:51.097261 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:51.595908 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:51.595986 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:51.596268 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:52.095979 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:52.096057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:52.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:52.596180 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:52.596273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:52.596678 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:53.096355 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:53.096427 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:53.096682 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:53.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:53.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:53.596491 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:53.596544 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:54.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:54.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:54.096480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:54.596158 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:54.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:54.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:55.096042 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:55.096117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:55.596055 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:55.596132 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:55.596488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:56.096195 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:56.096268 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:56.096566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:56.096624 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:56.596612 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:56.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:56.597015 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:57.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:57.096867 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:57.097193 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:57.595906 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:57.595972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:57.596218 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:58.097085 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:58.097172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:58.097567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:58.097619 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:58.596165 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:58.596241 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:58.596588 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:59.096257 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:59.096325 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:59.096568 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:59.596240 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:59.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:59.596636 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:00.096136 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:00.096222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:00.096550 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:00.596394 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:00.596465 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:00.596730 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:00.596772 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:01.096774 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:01.096885 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:01.097366 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:01.595973 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:01.596048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:01.597013 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.096796 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.096863 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.097153 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.596892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.596965 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:02.597411 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:03.095937 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.096024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.096410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:03.596093 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.596163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.596410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.596053 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.596474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:05.096026 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.096107 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:05.096479 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:05.596083 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.596157 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.096062 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.096137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.096505 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.596315 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.596381 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.596649 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:07.096321 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.096392 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:07.096788 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:07.596571 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.596648 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.596948 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.096706 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.096778 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.097061 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.596819 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.596900 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.597237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.095966 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.096044 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.596095 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:09.596474 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:10.096007 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.096404 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:10.596067 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.596141 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.096744 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.096823 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.097172 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.595924 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.596004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.596347 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.096155 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:12.096542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:12.595989 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.596092 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.096106 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.096437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.596489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:14.596542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:15.096232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.096313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.096651 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:15.596333 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.596407 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.596674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.096354 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.096430 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.096766 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.596753 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.596828 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.597170 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:16.597226 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:17.096933 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.097004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.097253 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:17.595951 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.596024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.095977 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.096058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.096393 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.596069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:19.096024 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.096412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:19.096467 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:19.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.096172 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.096518 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.596013 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.596415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:21.096789 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.096865 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.097200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:21.097256 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:21.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.596143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.596421 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:22.096046 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:22.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:22.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:22.596043 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:22.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:22.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:23.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:23.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:23.096346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:23.595990 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:23.596063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:23.596380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:23.596435 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:24.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:24.096108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:24.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:24.596160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:24.596230 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:24.596484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:25.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:25.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:25.096452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:25.596159 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:25.596240 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:25.596576 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:25.596629 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:26.096303 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:26.096374 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:26.096637 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:26.596570 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:26.596649 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:26.596995 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:27.096624 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:27.096704 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:27.097042 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:27.596727 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:27.596794 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:27.597086 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:27.597135 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:28.096901 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:28.096979 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:28.097317 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:28.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:28.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:28.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:29.096139 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:29.096211 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:29.096453 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:29.596030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:29.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:29.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:30.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:30.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:30.096532 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:30.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:30.596085 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:30.596179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:30.596498 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:31.096503 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:31.096583 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:31.596830 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:31.596908 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:31.597245 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:32.095958 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.096033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.096334 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:32.596022 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:32.596482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:33.096166 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.096574 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:33.596105 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.596178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.096079 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.096492 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.596212 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.596289 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.596632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:34.596687 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:35.096345 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.096429 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.096712 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:35.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.596142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.596460 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.096237 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.096579 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.596663 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.596739 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.597059 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:36.597108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:37.096872 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.096958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:37.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.596110 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.596433 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.096109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.096379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.596113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:39.096175 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:39.096631 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:39.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.596384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.096140 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.096474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.596119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:41.096370 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.096433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:41.096759 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:41.596613 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.596690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.596999 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.096724 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.096822 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.097289 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.596970 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.597040 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.597359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.096020 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.595988 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.596385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:43.596439 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:44.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.096351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:44.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.096209 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.096309 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.096703 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.595999 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.596318 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:46.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.096535 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:46.096587 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:46.596358 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.596433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.596764 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.096505 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.096569 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.096863 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.596617 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.597077 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:48.096892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.096972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:48.097376 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:48.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.096001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.096409 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.596021 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.596390 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.096100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.096418 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.595985 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.596374 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:50.596428 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:51.096388 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.096466 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.096827 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:51.596847 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.596923 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.597181 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.095961 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.096037 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.096403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.596120 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.596192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.596527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:52.596580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:53.096225 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.096307 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:53.596232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.596643 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.096242 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.096330 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.096701 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.596365 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.596434 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.596698 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:54.596749 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:55.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:55.596071 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.596147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.096173 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.596488 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.596560 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.596914 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:56.596966 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:57.096719 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.096795 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.097343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:57.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.596070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.596325 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.096471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.596425 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:59.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.096321 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:59.096369 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:59.596006 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.096151 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.096235 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.096597 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.596299 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.596383 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.596831 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:01.096422 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.096517 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.096896 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:33:01.096952 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:33:01.596985 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.597111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.597684 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:02.096224 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:02.096288 1915516 node_ready.go:38] duration metric: took 6m0.000473787s for node "functional-837733" to be "Ready" ...
	I1222 00:33:02.099371 1915516 out.go:203] 
	W1222 00:33:02.102245 1915516 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:33:02.102270 1915516 out.go:285] * 
	W1222 00:33:02.104397 1915516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:33:02.107416 1915516 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:33:10 functional-837733 crio[5399]: time="2025-12-22T00:33:10.617626659Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c6f1acae-c477-41f7-830a-ff01af132bb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.639794438Z" level=info msg="Checking image status: minikube-local-cache-test:functional-837733" id=0783c1f4-6d62-42d6-aa6c-bad49863effd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.639986293Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.640030033Z" level=info msg="Image minikube-local-cache-test:functional-837733 not found" id=0783c1f4-6d62-42d6-aa6c-bad49863effd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.640102474Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-837733 found" id=0783c1f4-6d62-42d6-aa6c-bad49863effd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.664189211Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-837733" id=416451d8-14da-48f5-a344-4f368ea7fddb name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.664330893Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-837733 not found" id=416451d8-14da-48f5-a344-4f368ea7fddb name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.664372795Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-837733 found" id=416451d8-14da-48f5-a344-4f368ea7fddb name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.688241356Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-837733" id=990cb3df-5cf2-46a3-b39f-4a45bc4e5dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.688386328Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-837733 not found" id=990cb3df-5cf2-46a3-b39f-4a45bc4e5dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.688425301Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-837733 found" id=990cb3df-5cf2-46a3-b39f-4a45bc4e5dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.63936917Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=411a92f8-552c-4bf1-9ee3-18d2f051d870 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.956776534Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=07f64d6a-449c-4e73-a331-78dea6e21094 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.9569554Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=07f64d6a-449c-4e73-a331-78dea6e21094 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.956994636Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=07f64d6a-449c-4e73-a331-78dea6e21094 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.487551834Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=80a9bc21-d691-40a7-9ef4-5dacc774178d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.48768056Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=80a9bc21-d691-40a7-9ef4-5dacc774178d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.487715636Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=80a9bc21-d691-40a7-9ef4-5dacc774178d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.51275391Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5c484a81-d278-42e3-bb02-8eae07eb5ee3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.512921051Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5c484a81-d278-42e3-bb02-8eae07eb5ee3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.512970363Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5c484a81-d278-42e3-bb02-8eae07eb5ee3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.539390396Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7954635d-1ee0-4ffa-a86e-bef1b579fc27 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.539565775Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7954635d-1ee0-4ffa-a86e-bef1b579fc27 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.539607284Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7954635d-1ee0-4ffa-a86e-bef1b579fc27 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:14 functional-837733 crio[5399]: time="2025-12-22T00:33:14.061121527Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=766679b3-3967-49ff-ae5c-f74ba52d3401 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:33:15.598367    9430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:15.599198    9430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:15.600168    9430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:15.600925    9430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:15.602610    9430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:33:15 up 1 day,  7:15,  0 user,  load average: 0.58, 0.32, 0.79
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:33:13 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:14 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 22 00:33:14 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:14 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:14 functional-837733 kubelet[9324]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:14 functional-837733 kubelet[9324]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:14 functional-837733 kubelet[9324]: E1222 00:33:14.134958    9324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:14 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:14 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:14 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 22 00:33:14 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:14 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:14 functional-837733 kubelet[9345]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:14 functional-837733 kubelet[9345]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:14 functional-837733 kubelet[9345]: E1222 00:33:14.902373    9345 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:14 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:14 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:15 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 22 00:33:15 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:15 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:15 functional-837733 kubelet[9434]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:15 functional-837733 kubelet[9434]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:15 functional-837733 kubelet[9434]: E1222 00:33:15.660205    9434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:15 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:15 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (353.07393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-837733 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-837733 get pods: exit status 1 (100.321631ms)

                                                
                                                
** stderr ** 
	E1222 00:33:16.807225 1921097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:16.807583 1921097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:16.809016 1921097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:16.809352 1921097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:33:16.810753 1921097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-837733 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (316.360888ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 logs -n 25: (1.034053953s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image ls --format json --alsologtostderr                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                            │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ start          │ -p functional-837733 --alsologtostderr -v=8                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:26 UTC │                     │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:latest                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add minikube-local-cache-test:functional-837733                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache delete minikube-local-cache-test:functional-837733                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl images                                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ cache          │ functional-837733 cache reload                                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ kubectl        │ functional-837733 kubectl -- --context functional-837733 get pods                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:26:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:26:56.306253 1915516 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:26:56.306390 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306407 1915516 out.go:374] Setting ErrFile to fd 2...
	I1222 00:26:56.306413 1915516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:26:56.306666 1915516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:26:56.307062 1915516 out.go:368] Setting JSON to false
	I1222 00:26:56.307917 1915516 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112166,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:26:56.307982 1915516 start.go:143] virtualization:  
	I1222 00:26:56.313423 1915516 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:26:56.316413 1915516 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:26:56.316480 1915516 notify.go:221] Checking for updates...
	I1222 00:26:56.322104 1915516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:26:56.324851 1915516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:26:56.327697 1915516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:26:56.330483 1915516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:26:56.333306 1915516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:26:56.336519 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:56.336644 1915516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:26:56.361435 1915516 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:26:56.361557 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.415190 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.406057371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.415295 1915516 docker.go:319] overlay module found
	I1222 00:26:56.418305 1915516 out.go:179] * Using the docker driver based on existing profile
	I1222 00:26:56.421198 1915516 start.go:309] selected driver: docker
	I1222 00:26:56.421214 1915516 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.421302 1915516 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:26:56.421450 1915516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:26:56.481024 1915516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:26:56.471849294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:26:56.481474 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:56.481538 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:56.481593 1915516 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:26:56.484769 1915516 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:26:56.487560 1915516 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:26:56.490463 1915516 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:26:56.493542 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:56.493593 1915516 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:26:56.493605 1915516 cache.go:65] Caching tarball of preloaded images
	I1222 00:26:56.493623 1915516 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:26:56.493685 1915516 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:26:56.493695 1915516 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:26:56.493807 1915516 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:26:56.512598 1915516 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:26:56.512623 1915516 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:26:56.512639 1915516 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:26:56.512668 1915516 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:26:56.512725 1915516 start.go:364] duration metric: took 34.567µs to acquireMachinesLock for "functional-837733"
	I1222 00:26:56.512749 1915516 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:26:56.512754 1915516 fix.go:54] fixHost starting: 
	I1222 00:26:56.513015 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:26:56.528652 1915516 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:26:56.528683 1915516 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:26:56.531906 1915516 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:26:56.531939 1915516 machine.go:94] provisionDockerMachine start ...
	I1222 00:26:56.532017 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.548171 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.548493 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.548508 1915516 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:26:56.684687 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.684712 1915516 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:26:56.684788 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.701819 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.702143 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.702159 1915516 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:26:56.841689 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:26:56.841784 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:56.859107 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:56.859418 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:56.859440 1915516 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:26:56.989653 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:26:56.989683 1915516 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:26:56.989702 1915516 ubuntu.go:190] setting up certificates
	I1222 00:26:56.989718 1915516 provision.go:84] configureAuth start
	I1222 00:26:56.989779 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:57.009887 1915516 provision.go:143] copyHostCerts
	I1222 00:26:57.009938 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.009973 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:26:57.009985 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:26:57.010062 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:26:57.010160 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010187 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:26:57.010205 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:26:57.010234 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:26:57.010285 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010306 1915516 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:26:57.010311 1915516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:26:57.010345 1915516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:26:57.010415 1915516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:26:57.201216 1915516 provision.go:177] copyRemoteCerts
	I1222 00:26:57.201293 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:26:57.201364 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.220369 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.317164 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:26:57.317296 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:26:57.334371 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:26:57.334443 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:26:57.355914 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:26:57.356016 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:26:57.374427 1915516 provision.go:87] duration metric: took 384.686066ms to configureAuth
	I1222 00:26:57.374456 1915516 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:26:57.374673 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:26:57.374803 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.396901 1915516 main.go:144] libmachine: Using SSH client type: native
	I1222 00:26:57.397214 1915516 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:26:57.397228 1915516 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:26:57.748387 1915516 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:26:57.748426 1915516 machine.go:97] duration metric: took 1.216463326s to provisionDockerMachine
	I1222 00:26:57.748438 1915516 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:26:57.748450 1915516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:26:57.748516 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:26:57.748562 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.764805 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:57.860955 1915516 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:26:57.863953 1915516 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:26:57.863971 1915516 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:26:57.863984 1915516 command_runner.go:130] > VERSION_ID="12"
	I1222 00:26:57.863988 1915516 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:26:57.863993 1915516 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:26:57.863997 1915516 command_runner.go:130] > ID=debian
	I1222 00:26:57.864002 1915516 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:26:57.864007 1915516 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:26:57.864014 1915516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:26:57.864062 1915516 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:26:57.864078 1915516 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:26:57.864088 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:26:57.864138 1915516 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:26:57.864212 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:26:57.864218 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /etc/ssl/certs/18812192.pem
	I1222 00:26:57.864290 1915516 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:26:57.864295 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> /etc/test/nested/copy/1881219/hosts
	I1222 00:26:57.864343 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:26:57.871401 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:26:57.887952 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:26:57.904770 1915516 start.go:296] duration metric: took 156.318697ms for postStartSetup
	I1222 00:26:57.904856 1915516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:26:57.904900 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:57.924440 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.018813 1915516 command_runner.go:130] > 12%
	I1222 00:26:58.018908 1915516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:26:58.024077 1915516 command_runner.go:130] > 172G
	I1222 00:26:58.024155 1915516 fix.go:56] duration metric: took 1.511397372s for fixHost
	I1222 00:26:58.024173 1915516 start.go:83] releasing machines lock for "functional-837733", held for 1.511435197s
	I1222 00:26:58.024244 1915516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:26:58.041457 1915516 ssh_runner.go:195] Run: cat /version.json
	I1222 00:26:58.041501 1915516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:26:58.041514 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.041557 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:26:58.062715 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.064581 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:26:58.169068 1915516 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:26:58.266136 1915516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:26:58.269129 1915516 ssh_runner.go:195] Run: systemctl --version
	I1222 00:26:58.275371 1915516 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:26:58.275406 1915516 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:26:58.275466 1915516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:26:58.310186 1915516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:26:58.314515 1915516 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:26:58.314563 1915516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:26:58.314620 1915516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:26:58.322376 1915516 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:26:58.322401 1915516 start.go:496] detecting cgroup driver to use...
	I1222 00:26:58.322452 1915516 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:26:58.322524 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:26:58.337747 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:26:58.350732 1915516 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:26:58.350819 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:26:58.365932 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:26:58.378579 1915516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:26:58.492803 1915516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:26:58.604423 1915516 docker.go:234] disabling docker service ...
	I1222 00:26:58.604496 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:26:58.619028 1915516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:26:58.631440 1915516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:26:58.747823 1915516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:26:58.866539 1915516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:26:58.885381 1915516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:26:58.901152 1915516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1222 00:26:58.902678 1915516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:26:58.902778 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.912460 1915516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:26:58.912567 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.924743 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.935404 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.944063 1915516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:26:58.951908 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.960893 1915516 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.969826 1915516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:58.978699 1915516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:26:58.985309 1915516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:26:58.986567 1915516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:26:58.993813 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.112143 1915516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:26:59.288646 1915516 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:26:59.288768 1915516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:26:59.292480 1915516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1222 00:26:59.292507 1915516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:26:59.292515 1915516 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1222 00:26:59.292522 1915516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:26:59.292531 1915516 command_runner.go:130] > Access: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292537 1915516 command_runner.go:130] > Modify: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292542 1915516 command_runner.go:130] > Change: 2025-12-22 00:26:59.216907321 +0000
	I1222 00:26:59.292549 1915516 command_runner.go:130] >  Birth: -
	I1222 00:26:59.292591 1915516 start.go:564] Will wait 60s for crictl version
	I1222 00:26:59.292646 1915516 ssh_runner.go:195] Run: which crictl
	I1222 00:26:59.295996 1915516 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:26:59.296252 1915516 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:26:59.318308 1915516 command_runner.go:130] > Version:  0.1.0
	I1222 00:26:59.318367 1915516 command_runner.go:130] > RuntimeName:  cri-o
	I1222 00:26:59.318402 1915516 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1222 00:26:59.318421 1915516 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:26:59.320537 1915516 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:26:59.320652 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.347108 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.347172 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.347193 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.347237 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.347255 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.347273 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.347296 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.347319 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.347336 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.347352 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.347374 1915516 command_runner.go:130] >      static
	I1222 00:26:59.347390 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.347405 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.347421 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.347442 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.347466 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.347483 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.347500 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.347516 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.347534 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.350153 1915516 ssh_runner.go:195] Run: crio --version
	I1222 00:26:59.376069 1915516 command_runner.go:130] > crio version 1.34.3
	I1222 00:26:59.376144 1915516 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1222 00:26:59.376164 1915516 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1222 00:26:59.376182 1915516 command_runner.go:130] >    GitTreeState:   dirty
	I1222 00:26:59.376201 1915516 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1222 00:26:59.376235 1915516 command_runner.go:130] >    GoVersion:      go1.24.6
	I1222 00:26:59.376261 1915516 command_runner.go:130] >    Compiler:       gc
	I1222 00:26:59.376279 1915516 command_runner.go:130] >    Platform:       linux/arm64
	I1222 00:26:59.376305 1915516 command_runner.go:130] >    Linkmode:       static
	I1222 00:26:59.376329 1915516 command_runner.go:130] >    BuildTags:
	I1222 00:26:59.376347 1915516 command_runner.go:130] >      static
	I1222 00:26:59.376375 1915516 command_runner.go:130] >      netgo
	I1222 00:26:59.376391 1915516 command_runner.go:130] >      osusergo
	I1222 00:26:59.376409 1915516 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1222 00:26:59.376433 1915516 command_runner.go:130] >      seccomp
	I1222 00:26:59.376462 1915516 command_runner.go:130] >      apparmor
	I1222 00:26:59.376479 1915516 command_runner.go:130] >      selinux
	I1222 00:26:59.376495 1915516 command_runner.go:130] >    LDFlags:          unknown
	I1222 00:26:59.376513 1915516 command_runner.go:130] >    SeccompEnabled:   true
	I1222 00:26:59.376545 1915516 command_runner.go:130] >    AppArmorEnabled:  false
	I1222 00:26:59.383960 1915516 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:26:59.386763 1915516 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:26:59.402614 1915516 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:26:59.406608 1915516 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:26:59.406693 1915516 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:26:59.406800 1915516 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:26:59.406861 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.440185 1915516 command_runner.go:130] > {
	I1222 00:26:59.440203 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.440208 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440216 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.440220 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440226 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.440230 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440234 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440243 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.440251 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.440254 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440259 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.440263 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440270 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440273 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440277 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440284 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.440288 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440301 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.440305 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440309 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440317 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.440326 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.440329 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440333 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.440337 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440344 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440347 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440350 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440357 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.440361 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440366 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.440369 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440373 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440381 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.440388 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.440392 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440396 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.440404 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.440408 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440411 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440414 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440421 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.440424 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440429 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.440433 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440437 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440445 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.440452 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.440455 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440459 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.440466 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440470 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440474 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440482 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440486 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440489 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440492 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440499 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.440502 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440509 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.440512 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440516 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440524 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.440532 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.440535 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440539 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.440542 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440546 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440549 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440553 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440557 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440560 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440563 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440569 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.440573 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440578 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.440582 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440586 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440594 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.440602 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.440606 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440610 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.440613 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440624 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440627 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440631 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440634 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440637 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440640 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440647 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.440651 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440656 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.440659 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440663 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440671 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.440679 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.440681 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440685 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.440689 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440693 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440697 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440700 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440706 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.440710 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440714 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.440718 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440721 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440729 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.440745 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.440748 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440752 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.440755 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440759 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.440762 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440765 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440769 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.440775 1915516 command_runner.go:130] >     },
	I1222 00:26:59.440779 1915516 command_runner.go:130] >     {
	I1222 00:26:59.440785 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.440789 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.440793 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.440797 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440800 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.440808 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.440816 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.440819 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.440842 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.440846 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.440850 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.440853 1915516 command_runner.go:130] >       },
	I1222 00:26:59.440857 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.440860 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.440863 1915516 command_runner.go:130] >     }
	I1222 00:26:59.440865 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.440868 1915516 command_runner.go:130] > }
	I1222 00:26:59.441031 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.441039 1915516 crio.go:433] Images already preloaded, skipping extraction
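	The image list above is plain JSON emitted by crictl, so it can be decoded with a few lines of Go. A minimal sketch, assuming crictl is available locally (the struct fields mirror the keys visible in the log; everything else is illustrative, not minikube's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImage mirrors the per-image fields shown in the log output.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // size in bytes, serialized as a string
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	}

	// crictlImageList matches the top-level {"images": [...]} document.
	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// The same command the log runs over SSH, invoked locally here.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size, "pinned:", img.Pinned)
		}
	}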
	I1222 00:26:59.441099 1915516 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:26:59.463515 1915516 command_runner.go:130] > {
	I1222 00:26:59.463533 1915516 command_runner.go:130] >   "images":  [
	I1222 00:26:59.463538 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463547 1915516 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:26:59.463552 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463566 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:26:59.463570 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463574 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463582 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1222 00:26:59.463590 1915516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1222 00:26:59.463594 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463598 1915516 command_runner.go:130] >       "size":  "111333938",
	I1222 00:26:59.463602 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463606 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463610 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463613 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463619 1915516 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:26:59.463624 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463629 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:26:59.463632 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463636 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463655 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1222 00:26:59.463664 1915516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:26:59.463667 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463671 1915516 command_runner.go:130] >       "size":  "29037500",
	I1222 00:26:59.463675 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463680 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463683 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463686 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463693 1915516 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:26:59.463697 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463703 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:26:59.463706 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463710 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463718 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1222 00:26:59.463726 1915516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1222 00:26:59.463730 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463734 1915516 command_runner.go:130] >       "size":  "74491780",
	I1222 00:26:59.463741 1915516 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:26:59.463745 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463748 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463752 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463758 1915516 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:26:59.463761 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463767 1915516 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:26:59.463771 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463775 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463782 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1222 00:26:59.463790 1915516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1222 00:26:59.463793 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463797 1915516 command_runner.go:130] >       "size":  "60850387",
	I1222 00:26:59.463801 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463805 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463808 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463816 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463819 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463823 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463826 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463832 1915516 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:26:59.463836 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463841 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:26:59.463844 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463848 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463855 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1222 00:26:59.463863 1915516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1222 00:26:59.463867 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463871 1915516 command_runner.go:130] >       "size":  "85015535",
	I1222 00:26:59.463875 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463878 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463882 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463885 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463893 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463896 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463899 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463905 1915516 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:26:59.463909 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463914 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:26:59.463917 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463921 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463929 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1222 00:26:59.463937 1915516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:26:59.463940 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463944 1915516 command_runner.go:130] >       "size":  "72170325",
	I1222 00:26:59.463947 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.463951 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.463954 1915516 command_runner.go:130] >       },
	I1222 00:26:59.463958 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.463962 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.463965 1915516 command_runner.go:130] >     },
	I1222 00:26:59.463968 1915516 command_runner.go:130] >     {
	I1222 00:26:59.463974 1915516 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:26:59.463979 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.463984 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:26:59.463987 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.463991 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.463998 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1222 00:26:59.464006 1915516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:26:59.464009 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464013 1915516 command_runner.go:130] >       "size":  "74107287",
	I1222 00:26:59.464017 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464021 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464024 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464027 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464033 1915516 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:26:59.464040 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464046 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:26:59.464050 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464053 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464061 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1222 00:26:59.464078 1915516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1222 00:26:59.464081 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464085 1915516 command_runner.go:130] >       "size":  "49822549",
	I1222 00:26:59.464089 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464093 1915516 command_runner.go:130] >         "value":  "0"
	I1222 00:26:59.464096 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464100 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464103 1915516 command_runner.go:130] >       "pinned":  false
	I1222 00:26:59.464106 1915516 command_runner.go:130] >     },
	I1222 00:26:59.464109 1915516 command_runner.go:130] >     {
	I1222 00:26:59.464115 1915516 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:26:59.464119 1915516 command_runner.go:130] >       "repoTags":  [
	I1222 00:26:59.464123 1915516 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.464126 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464130 1915516 command_runner.go:130] >       "repoDigests":  [
	I1222 00:26:59.464137 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1222 00:26:59.464145 1915516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1222 00:26:59.464148 1915516 command_runner.go:130] >       ],
	I1222 00:26:59.464151 1915516 command_runner.go:130] >       "size":  "519884",
	I1222 00:26:59.464155 1915516 command_runner.go:130] >       "uid":  {
	I1222 00:26:59.464158 1915516 command_runner.go:130] >         "value":  "65535"
	I1222 00:26:59.464161 1915516 command_runner.go:130] >       },
	I1222 00:26:59.464165 1915516 command_runner.go:130] >       "username":  "",
	I1222 00:26:59.464168 1915516 command_runner.go:130] >       "pinned":  true
	I1222 00:26:59.464171 1915516 command_runner.go:130] >     }
	I1222 00:26:59.464174 1915516 command_runner.go:130] >   ]
	I1222 00:26:59.464176 1915516 command_runner.go:130] > }
	I1222 00:26:59.467117 1915516 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:26:59.467182 1915516 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:26:59.467204 1915516 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:26:59.467349 1915516 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
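	The unit text above is a systemd drop-in: the empty ExecStart= clears the ExecStart inherited from the packaged kubelet unit before the minikube-specific command line is set. A minimal sketch of rendering such a drop-in with Go's text/template (a simplified stand-in, not minikube's actual template; the reduced flag set and field names are illustrative):

	package main

	import (
		"os"
		"text/template"
	)

	// Simplified drop-in template modeled on the unit logged above.
	const kubeletDropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletBin}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		// Values taken from the log above.
		err := t.Execute(os.Stdout, struct {
			Runtime, KubeletBin, Hostname, NodeIP string
		}{"crio", "/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet", "functional-837733", "192.168.49.2"})
		if err != nil {
			panic(err)
		}
	}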
	I1222 00:26:59.467469 1915516 ssh_runner.go:195] Run: crio config
	I1222 00:26:59.517634 1915516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1222 00:26:59.517672 1915516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1222 00:26:59.517681 1915516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1222 00:26:59.517684 1915516 command_runner.go:130] > #
	I1222 00:26:59.517692 1915516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1222 00:26:59.517698 1915516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1222 00:26:59.517704 1915516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1222 00:26:59.517712 1915516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1222 00:26:59.517716 1915516 command_runner.go:130] > # reload'.
	I1222 00:26:59.517723 1915516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1222 00:26:59.517729 1915516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1222 00:26:59.517735 1915516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1222 00:26:59.517742 1915516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1222 00:26:59.517745 1915516 command_runner.go:130] > [crio]
	I1222 00:26:59.517751 1915516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1222 00:26:59.517756 1915516 command_runner.go:130] > # containers images, in this directory.
	I1222 00:26:59.517994 1915516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1222 00:26:59.518006 1915516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1222 00:26:59.518014 1915516 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1222 00:26:59.518022 1915516 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1222 00:26:59.518026 1915516 command_runner.go:130] > # imagestore = ""
	I1222 00:26:59.518032 1915516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1222 00:26:59.518043 1915516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1222 00:26:59.518048 1915516 command_runner.go:130] > # storage_driver = "overlay"
	I1222 00:26:59.518053 1915516 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1222 00:26:59.518059 1915516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1222 00:26:59.518209 1915516 command_runner.go:130] > # storage_option = [
	I1222 00:26:59.518216 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.518229 1915516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1222 00:26:59.518237 1915516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute path.
	I1222 00:26:59.518375 1915516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1222 00:26:59.518385 1915516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1222 00:26:59.518392 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1222 00:26:59.518406 1915516 command_runner.go:130] > # always happen on a node reboot
	I1222 00:26:59.518412 1915516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1222 00:26:59.518638 1915516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1222 00:26:59.518654 1915516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1222 00:26:59.518660 1915516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1222 00:26:59.521919 1915516 command_runner.go:130] > # version_file_persist = ""
	I1222 00:26:59.521945 1915516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1222 00:26:59.521986 1915516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1222 00:26:59.521998 1915516 command_runner.go:130] > # internal_wipe = true
	I1222 00:26:59.522012 1915516 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1222 00:26:59.522023 1915516 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1222 00:26:59.522028 1915516 command_runner.go:130] > # internal_repair = true
	I1222 00:26:59.522049 1915516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1222 00:26:59.522075 1915516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1222 00:26:59.522087 1915516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1222 00:26:59.522103 1915516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1222 00:26:59.522130 1915516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1222 00:26:59.522140 1915516 command_runner.go:130] > [crio.api]
	I1222 00:26:59.522156 1915516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1222 00:26:59.522172 1915516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1222 00:26:59.522199 1915516 command_runner.go:130] > # IP address on which the stream server will listen.
	I1222 00:26:59.522210 1915516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1222 00:26:59.522227 1915516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1222 00:26:59.522243 1915516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1222 00:26:59.522251 1915516 command_runner.go:130] > # stream_port = "0"
	I1222 00:26:59.522257 1915516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1222 00:26:59.522262 1915516 command_runner.go:130] > # stream_enable_tls = false
	I1222 00:26:59.522297 1915516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1222 00:26:59.522308 1915516 command_runner.go:130] > # stream_idle_timeout = ""
	I1222 00:26:59.522315 1915516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1222 00:26:59.522330 1915516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522334 1915516 command_runner.go:130] > # stream_tls_cert = ""
	I1222 00:26:59.522352 1915516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1222 00:26:59.522421 1915516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1222 00:26:59.522433 1915516 command_runner.go:130] > # stream_tls_key = ""
	I1222 00:26:59.522440 1915516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1222 00:26:59.522455 1915516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1222 00:26:59.522468 1915516 command_runner.go:130] > # automatically pick up the changes.
	I1222 00:26:59.522477 1915516 command_runner.go:130] > # stream_tls_ca = ""
	I1222 00:26:59.522505 1915516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522519 1915516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1222 00:26:59.522527 1915516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1222 00:26:59.522540 1915516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
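	(For reference: the 83886080 default quoted for both limits above is exactly 80 * 1024 * 1024 bytes, i.e. 80 MiB.)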
	I1222 00:26:59.522547 1915516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1222 00:26:59.522556 1915516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1222 00:26:59.522560 1915516 command_runner.go:130] > [crio.runtime]
	I1222 00:26:59.522574 1915516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1222 00:26:59.522584 1915516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1222 00:26:59.522588 1915516 command_runner.go:130] > # "nofile=1024:2048"
	I1222 00:26:59.522599 1915516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1222 00:26:59.522607 1915516 command_runner.go:130] > # default_ulimits = [
	I1222 00:26:59.522611 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522624 1915516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1222 00:26:59.522631 1915516 command_runner.go:130] > # no_pivot = false
	I1222 00:26:59.522642 1915516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1222 00:26:59.522652 1915516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1222 00:26:59.522657 1915516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1222 00:26:59.522672 1915516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1222 00:26:59.522681 1915516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1222 00:26:59.522689 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522700 1915516 command_runner.go:130] > # conmon = ""
	I1222 00:26:59.522705 1915516 command_runner.go:130] > # Cgroup setting for conmon
	I1222 00:26:59.522717 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1222 00:26:59.522721 1915516 command_runner.go:130] > conmon_cgroup = "pod"
	I1222 00:26:59.522733 1915516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1222 00:26:59.522741 1915516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1222 00:26:59.522760 1915516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1222 00:26:59.522767 1915516 command_runner.go:130] > # conmon_env = [
	I1222 00:26:59.522770 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522776 1915516 command_runner.go:130] > # Additional environment variables to set for all the
	I1222 00:26:59.522786 1915516 command_runner.go:130] > # containers. These are overridden if set in the
	I1222 00:26:59.522796 1915516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1222 00:26:59.522803 1915516 command_runner.go:130] > # default_env = [
	I1222 00:26:59.522807 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.522813 1915516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1222 00:26:59.522829 1915516 command_runner.go:130] > # This option is deprecated, and will be interpreted based on whether SELinux is enabled on the host in the future.
	I1222 00:26:59.522839 1915516 command_runner.go:130] > # selinux = false
	I1222 00:26:59.522846 1915516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1222 00:26:59.522861 1915516 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1222 00:26:59.522870 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522874 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.522886 1915516 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1222 00:26:59.522895 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522900 1915516 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1222 00:26:59.522911 1915516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1222 00:26:59.522926 1915516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1222 00:26:59.522936 1915516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1222 00:26:59.522948 1915516 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1222 00:26:59.522956 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.522962 1915516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1222 00:26:59.522976 1915516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1222 00:26:59.522984 1915516 command_runner.go:130] > # the cgroup blockio controller.
	I1222 00:26:59.522992 1915516 command_runner.go:130] > # blockio_config_file = ""
	I1222 00:26:59.522999 1915516 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1222 00:26:59.523010 1915516 command_runner.go:130] > # blockio parameters.
	I1222 00:26:59.523015 1915516 command_runner.go:130] > # blockio_reload = false
	I1222 00:26:59.523027 1915516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1222 00:26:59.523033 1915516 command_runner.go:130] > # irqbalance daemon.
	I1222 00:26:59.523049 1915516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1222 00:26:59.523059 1915516 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1222 00:26:59.523067 1915516 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1222 00:26:59.523081 1915516 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1222 00:26:59.523091 1915516 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1222 00:26:59.523103 1915516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1222 00:26:59.523112 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523116 1915516 command_runner.go:130] > # rdt_config_file = ""
	I1222 00:26:59.523122 1915516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1222 00:26:59.523133 1915516 command_runner.go:130] > cgroup_manager = "cgroupfs"
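	(Note: the kubelet and the container runtime must agree on the cgroup driver; this cluster runs CRI-O with cgroupfs, and the kubelet side is configured through the /var/lib/kubelet/config.yaml referenced in the ExecStart line logged earlier.)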
	I1222 00:26:59.523140 1915516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1222 00:26:59.523148 1915516 command_runner.go:130] > # separate_pull_cgroup = ""
	I1222 00:26:59.523159 1915516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1222 00:26:59.523202 1915516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1222 00:26:59.523210 1915516 command_runner.go:130] > # will be added.
	I1222 00:26:59.523215 1915516 command_runner.go:130] > # default_capabilities = [
	I1222 00:26:59.523219 1915516 command_runner.go:130] > # 	"CHOWN",
	I1222 00:26:59.523223 1915516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1222 00:26:59.523380 1915516 command_runner.go:130] > # 	"FSETID",
	I1222 00:26:59.523391 1915516 command_runner.go:130] > # 	"FOWNER",
	I1222 00:26:59.523395 1915516 command_runner.go:130] > # 	"SETGID",
	I1222 00:26:59.523399 1915516 command_runner.go:130] > # 	"SETUID",
	I1222 00:26:59.523427 1915516 command_runner.go:130] > # 	"SETPCAP",
	I1222 00:26:59.523432 1915516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1222 00:26:59.523436 1915516 command_runner.go:130] > # 	"KILL",
	I1222 00:26:59.523439 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523448 1915516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1222 00:26:59.523464 1915516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1222 00:26:59.523470 1915516 command_runner.go:130] > # add_inheritable_capabilities = false
	I1222 00:26:59.523476 1915516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1222 00:26:59.523482 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523486 1915516 command_runner.go:130] > default_sysctls = [
	I1222 00:26:59.523491 1915516 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1222 00:26:59.523494 1915516 command_runner.go:130] > ]
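	(This sysctl sets the start of the privileged port range to 0, so processes in the pod can bind ports below 1024 without CAP_NET_BIND_SERVICE.)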
	I1222 00:26:59.523499 1915516 command_runner.go:130] > # List of devices on the host that a
	I1222 00:26:59.523506 1915516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1222 00:26:59.523509 1915516 command_runner.go:130] > # allowed_devices = [
	I1222 00:26:59.523513 1915516 command_runner.go:130] > # 	"/dev/fuse",
	I1222 00:26:59.523517 1915516 command_runner.go:130] > # 	"/dev/net/tun",
	I1222 00:26:59.523520 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523525 1915516 command_runner.go:130] > # List of additional devices, specified as
	I1222 00:26:59.523533 1915516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1222 00:26:59.523538 1915516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1222 00:26:59.523544 1915516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1222 00:26:59.523548 1915516 command_runner.go:130] > # additional_devices = [
	I1222 00:26:59.523551 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523557 1915516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1222 00:26:59.523561 1915516 command_runner.go:130] > # cdi_spec_dirs = [
	I1222 00:26:59.523565 1915516 command_runner.go:130] > # 	"/etc/cdi",
	I1222 00:26:59.523569 1915516 command_runner.go:130] > # 	"/var/run/cdi",
	I1222 00:26:59.523574 1915516 command_runner.go:130] > # ]
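	(CDI is the Container Device Interface, a vendor-neutral specification for injecting devices into containers via spec files placed in these directories.)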
	I1222 00:26:59.523581 1915516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1222 00:26:59.523588 1915516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1222 00:26:59.523591 1915516 command_runner.go:130] > # Defaults to false.
	I1222 00:26:59.523596 1915516 command_runner.go:130] > # device_ownership_from_security_context = false
	I1222 00:26:59.523603 1915516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1222 00:26:59.523609 1915516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1222 00:26:59.523612 1915516 command_runner.go:130] > # hooks_dir = [
	I1222 00:26:59.523617 1915516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1222 00:26:59.523622 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.523628 1915516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1222 00:26:59.523635 1915516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1222 00:26:59.523642 1915516 command_runner.go:130] > # its default mounts from the following two files:
	I1222 00:26:59.523644 1915516 command_runner.go:130] > #
	I1222 00:26:59.523651 1915516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1222 00:26:59.523658 1915516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1222 00:26:59.523663 1915516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1222 00:26:59.523666 1915516 command_runner.go:130] > #
	I1222 00:26:59.523672 1915516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1222 00:26:59.523679 1915516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1222 00:26:59.523686 1915516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1222 00:26:59.523692 1915516 command_runner.go:130] > #      only add mounts it finds in this file.
	I1222 00:26:59.523695 1915516 command_runner.go:130] > #
	I1222 00:26:59.523700 1915516 command_runner.go:130] > # default_mounts_file = ""
	I1222 00:26:59.523706 1915516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1222 00:26:59.523713 1915516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1222 00:26:59.523716 1915516 command_runner.go:130] > # pids_limit = -1
	I1222 00:26:59.523723 1915516 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1222 00:26:59.523729 1915516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1222 00:26:59.523735 1915516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1222 00:26:59.523744 1915516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1222 00:26:59.523748 1915516 command_runner.go:130] > # log_size_max = -1
	I1222 00:26:59.523755 1915516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1222 00:26:59.523759 1915516 command_runner.go:130] > # log_to_journald = false
	I1222 00:26:59.523765 1915516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1222 00:26:59.523770 1915516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1222 00:26:59.523775 1915516 command_runner.go:130] > # Path to directory for container attach sockets.
	I1222 00:26:59.523779 1915516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1222 00:26:59.523784 1915516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1222 00:26:59.523788 1915516 command_runner.go:130] > # bind_mount_prefix = ""
	I1222 00:26:59.523794 1915516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1222 00:26:59.523798 1915516 command_runner.go:130] > # read_only = false
	I1222 00:26:59.523804 1915516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1222 00:26:59.523810 1915516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1222 00:26:59.523815 1915516 command_runner.go:130] > # live configuration reload.
	I1222 00:26:59.523818 1915516 command_runner.go:130] > # log_level = "info"
	I1222 00:26:59.523825 1915516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1222 00:26:59.523830 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.523834 1915516 command_runner.go:130] > # log_filter = ""
	I1222 00:26:59.523841 1915516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523847 1915516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1222 00:26:59.523850 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523858 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523862 1915516 command_runner.go:130] > # uid_mappings = ""
	I1222 00:26:59.523868 1915516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1222 00:26:59.523874 1915516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1222 00:26:59.523879 1915516 command_runner.go:130] > # separated by comma.
	I1222 00:26:59.523887 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523891 1915516 command_runner.go:130] > # gid_mappings = ""
	I1222 00:26:59.523897 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1222 00:26:59.523903 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523909 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523917 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523921 1915516 command_runner.go:130] > # minimum_mappable_uid = -1
	I1222 00:26:59.523928 1915516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1222 00:26:59.523934 1915516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1222 00:26:59.523940 1915516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1222 00:26:59.523948 1915516 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1222 00:26:59.523952 1915516 command_runner.go:130] > # minimum_mappable_gid = -1
	I1222 00:26:59.523958 1915516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1222 00:26:59.523964 1915516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1222 00:26:59.523970 1915516 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1222 00:26:59.523975 1915516 command_runner.go:130] > # ctr_stop_timeout = 30
	I1222 00:26:59.523981 1915516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1222 00:26:59.523987 1915516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1222 00:26:59.523992 1915516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1222 00:26:59.523997 1915516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1222 00:26:59.524001 1915516 command_runner.go:130] > # drop_infra_ctr = true
	I1222 00:26:59.524011 1915516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1222 00:26:59.524017 1915516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1222 00:26:59.524025 1915516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1222 00:26:59.524033 1915516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1222 00:26:59.524041 1915516 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1222 00:26:59.524048 1915516 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1222 00:26:59.524053 1915516 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1222 00:26:59.524059 1915516 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1222 00:26:59.524063 1915516 command_runner.go:130] > # shared_cpuset = ""
	I1222 00:26:59.524068 1915516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1222 00:26:59.524074 1915516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1222 00:26:59.524078 1915516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1222 00:26:59.524086 1915516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1222 00:26:59.524090 1915516 command_runner.go:130] > # pinns_path = ""
	I1222 00:26:59.524095 1915516 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1222 00:26:59.524101 1915516 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1222 00:26:59.524106 1915516 command_runner.go:130] > # enable_criu_support = true
	I1222 00:26:59.524111 1915516 command_runner.go:130] > # Enable/disable the generation of the container,
	I1222 00:26:59.524117 1915516 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1222 00:26:59.524121 1915516 command_runner.go:130] > # enable_pod_events = false
	I1222 00:26:59.524128 1915516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1222 00:26:59.524133 1915516 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1222 00:26:59.524137 1915516 command_runner.go:130] > # default_runtime = "crun"
	I1222 00:26:59.524142 1915516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1222 00:26:59.524150 1915516 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1222 00:26:59.524159 1915516 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1222 00:26:59.524164 1915516 command_runner.go:130] > # creation as a file is not desired either.
	I1222 00:26:59.524173 1915516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1222 00:26:59.524178 1915516 command_runner.go:130] > # the hostname is being managed dynamically.
	I1222 00:26:59.524182 1915516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1222 00:26:59.524185 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.524191 1915516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1222 00:26:59.524198 1915516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1222 00:26:59.524204 1915516 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1222 00:26:59.524209 1915516 command_runner.go:130] > # Each entry in the table should follow the format:
	I1222 00:26:59.524212 1915516 command_runner.go:130] > #
	I1222 00:26:59.524217 1915516 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1222 00:26:59.524221 1915516 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1222 00:26:59.524226 1915516 command_runner.go:130] > # runtime_type = "oci"
	I1222 00:26:59.524230 1915516 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1222 00:26:59.524236 1915516 command_runner.go:130] > # inherit_default_runtime = false
	I1222 00:26:59.524246 1915516 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1222 00:26:59.524250 1915516 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1222 00:26:59.524255 1915516 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1222 00:26:59.524259 1915516 command_runner.go:130] > # monitor_env = []
	I1222 00:26:59.524264 1915516 command_runner.go:130] > # privileged_without_host_devices = false
	I1222 00:26:59.524268 1915516 command_runner.go:130] > # allowed_annotations = []
	I1222 00:26:59.524274 1915516 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1222 00:26:59.524277 1915516 command_runner.go:130] > # no_sync_log = false
	I1222 00:26:59.524281 1915516 command_runner.go:130] > # default_annotations = {}
	I1222 00:26:59.524285 1915516 command_runner.go:130] > # stream_websockets = false
	I1222 00:26:59.524289 1915516 command_runner.go:130] > # seccomp_profile = ""
	I1222 00:26:59.524337 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.524343 1915516 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1222 00:26:59.524349 1915516 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1222 00:26:59.524356 1915516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1222 00:26:59.524362 1915516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1222 00:26:59.524365 1915516 command_runner.go:130] > #   in $PATH.
	I1222 00:26:59.524371 1915516 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1222 00:26:59.524376 1915516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1222 00:26:59.524382 1915516 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1222 00:26:59.524386 1915516 command_runner.go:130] > #   state.
	I1222 00:26:59.524392 1915516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1222 00:26:59.524399 1915516 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1222 00:26:59.524405 1915516 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1222 00:26:59.524411 1915516 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1222 00:26:59.524417 1915516 command_runner.go:130] > #   the values from the default runtime on load time.
	I1222 00:26:59.524423 1915516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1222 00:26:59.524429 1915516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1222 00:26:59.524435 1915516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1222 00:26:59.524441 1915516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1222 00:26:59.524446 1915516 command_runner.go:130] > #   The currently recognized values are:
	I1222 00:26:59.524453 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1222 00:26:59.524460 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1222 00:26:59.524466 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1222 00:26:59.524472 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1222 00:26:59.524479 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1222 00:26:59.524489 1915516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1222 00:26:59.524496 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1222 00:26:59.524502 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1222 00:26:59.524508 1915516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1222 00:26:59.524514 1915516 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1222 00:26:59.524521 1915516 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1222 00:26:59.524528 1915516 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1222 00:26:59.524534 1915516 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1222 00:26:59.524540 1915516 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1222 00:26:59.524546 1915516 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1222 00:26:59.524554 1915516 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1222 00:26:59.524560 1915516 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1222 00:26:59.524564 1915516 command_runner.go:130] > #   deprecated option "conmon".
	I1222 00:26:59.524571 1915516 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1222 00:26:59.524576 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1222 00:26:59.524583 1915516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1222 00:26:59.524587 1915516 command_runner.go:130] > #   should be moved to the container's cgroup.
	I1222 00:26:59.524594 1915516 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1222 00:26:59.524599 1915516 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1222 00:26:59.524606 1915516 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1222 00:26:59.524610 1915516 command_runner.go:130] > #   conmon-rs by using:
	I1222 00:26:59.524617 1915516 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1222 00:26:59.524625 1915516 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1222 00:26:59.524633 1915516 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1222 00:26:59.524640 1915516 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1222 00:26:59.524645 1915516 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1222 00:26:59.524652 1915516 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1222 00:26:59.524660 1915516 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1222 00:26:59.524665 1915516 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1222 00:26:59.524673 1915516 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1222 00:26:59.524683 1915516 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1222 00:26:59.524688 1915516 command_runner.go:130] > #   when a machine crash happens.
	I1222 00:26:59.524695 1915516 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1222 00:26:59.524703 1915516 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1222 00:26:59.524711 1915516 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1222 00:26:59.524715 1915516 command_runner.go:130] > #   seccomp profile for the runtime.
	I1222 00:26:59.524721 1915516 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1222 00:26:59.524729 1915516 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
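	For reference, the options above compose as in the following hypothetical VM-type handler; the handler name, paths, and values are illustrative assumptions, not taken from this run:

		# hypothetical example only, not part of the config dumped here
		[crio.runtime.runtimes.kata]
		runtime_path = "/usr/bin/kata-runtime"         # runtime executable on the host
		runtime_type = "vm"                            # "oci" is assumed when omitted
		runtime_root = "/run/kata"
		runtime_config_path = "/etc/kata/config.toml"  # only valid for the "vm" runtime_type
		privileged_without_host_devices = true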
	I1222 00:26:59.524732 1915516 command_runner.go:130] > #
	I1222 00:26:59.524737 1915516 command_runner.go:130] > # Using the seccomp notifier feature:
	I1222 00:26:59.524739 1915516 command_runner.go:130] > #
	I1222 00:26:59.524745 1915516 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1222 00:26:59.524752 1915516 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1222 00:26:59.524755 1915516 command_runner.go:130] > #
	I1222 00:26:59.524761 1915516 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1222 00:26:59.524768 1915516 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1222 00:26:59.524770 1915516 command_runner.go:130] > #
	I1222 00:26:59.524776 1915516 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1222 00:26:59.524780 1915516 command_runner.go:130] > # feature.
	I1222 00:26:59.524783 1915516 command_runner.go:130] > #
	I1222 00:26:59.524789 1915516 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1222 00:26:59.524795 1915516 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1222 00:26:59.524801 1915516 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1222 00:26:59.524807 1915516 command_runner.go:130] > # a blocked syscall and will terminate the workload after a timeout of 5
	I1222 00:26:59.524813 1915516 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1222 00:26:59.524815 1915516 command_runner.go:130] > #
	I1222 00:26:59.524834 1915516 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1222 00:26:59.524841 1915516 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1222 00:26:59.524844 1915516 command_runner.go:130] > #
	I1222 00:26:59.524855 1915516 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1222 00:26:59.524861 1915516 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1222 00:26:59.524865 1915516 command_runner.go:130] > #
	I1222 00:26:59.524871 1915516 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1222 00:26:59.524877 1915516 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1222 00:26:59.524880 1915516 command_runner.go:130] > # limitation.
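	A minimal sketch of a handler that opts into the notifier, assuming a hypothetical handler name (the annotation key is the one documented above; the pod would additionally need the annotation "io.kubernetes.cri-o.seccompNotifierAction=stop" and restartPolicy set to Never):

		# hypothetical debugging handler
		[crio.runtime.runtimes.runc-notify]
		runtime_path = "/usr/libexec/crio/runc"
		allowed_annotations = [
			"io.kubernetes.cri-o.seccompNotifierAction",
		]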
	I1222 00:26:59.524884 1915516 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1222 00:26:59.524890 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1222 00:26:59.524894 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524900 1915516 command_runner.go:130] > runtime_root = "/run/crun"
	I1222 00:26:59.524904 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524908 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524912 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524917 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524921 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524924 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524931 1915516 command_runner.go:130] > allowed_annotations = [
	I1222 00:26:59.524936 1915516 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1222 00:26:59.524939 1915516 command_runner.go:130] > ]
	I1222 00:26:59.524944 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524948 1915516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1222 00:26:59.524952 1915516 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1222 00:26:59.524955 1915516 command_runner.go:130] > runtime_type = ""
	I1222 00:26:59.524959 1915516 command_runner.go:130] > runtime_root = "/run/runc"
	I1222 00:26:59.524964 1915516 command_runner.go:130] > inherit_default_runtime = false
	I1222 00:26:59.524967 1915516 command_runner.go:130] > runtime_config_path = ""
	I1222 00:26:59.524972 1915516 command_runner.go:130] > container_min_memory = ""
	I1222 00:26:59.524976 1915516 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1222 00:26:59.524980 1915516 command_runner.go:130] > monitor_cgroup = "pod"
	I1222 00:26:59.524984 1915516 command_runner.go:130] > monitor_exec_cgroup = ""
	I1222 00:26:59.524988 1915516 command_runner.go:130] > privileged_without_host_devices = false
	I1222 00:26:59.524995 1915516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1222 00:26:59.525000 1915516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1222 00:26:59.525007 1915516 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1222 00:26:59.525015 1915516 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1222 00:26:59.525027 1915516 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1222 00:26:59.525037 1915516 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1222 00:26:59.525044 1915516 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1222 00:26:59.525050 1915516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1222 00:26:59.525059 1915516 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1222 00:26:59.525068 1915516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1222 00:26:59.525074 1915516 command_runner.go:130] > # to override the default value for that resource type.
	I1222 00:26:59.525081 1915516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1222 00:26:59.525085 1915516 command_runner.go:130] > # Example:
	I1222 00:26:59.525089 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1222 00:26:59.525095 1915516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1222 00:26:59.525100 1915516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1222 00:26:59.525108 1915516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1222 00:26:59.525111 1915516 command_runner.go:130] > # cpuset = "0-1"
	I1222 00:26:59.525115 1915516 command_runner.go:130] > # cpushares = "5"
	I1222 00:26:59.525119 1915516 command_runner.go:130] > # cpuquota = "1000"
	I1222 00:26:59.525122 1915516 command_runner.go:130] > # cpuperiod = "100000"
	I1222 00:26:59.525126 1915516 command_runner.go:130] > # cpulimit = "35"
	I1222 00:26:59.525129 1915516 command_runner.go:130] > # Where:
	I1222 00:26:59.525133 1915516 command_runner.go:130] > # The workload name is workload-type.
	I1222 00:26:59.525141 1915516 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1222 00:26:59.525147 1915516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1222 00:26:59.525153 1915516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1222 00:26:59.525161 1915516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would need the following annotation:
	I1222 00:26:59.525166 1915516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1222 00:26:59.525171 1915516 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1222 00:26:59.525178 1915516 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1222 00:26:59.525182 1915516 command_runner.go:130] > # Default value is set to true
	I1222 00:26:59.525187 1915516 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1222 00:26:59.525192 1915516 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1222 00:26:59.525197 1915516 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1222 00:26:59.525201 1915516 command_runner.go:130] > # Default value is set to 'false'
	I1222 00:26:59.525205 1915516 command_runner.go:130] > # disable_hostport_mapping = false
	I1222 00:26:59.525211 1915516 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1222 00:26:59.525219 1915516 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1222 00:26:59.525222 1915516 command_runner.go:130] > # timezone = ""
	I1222 00:26:59.525229 1915516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1222 00:26:59.525231 1915516 command_runner.go:130] > #
	I1222 00:26:59.525238 1915516 command_runner.go:130] > # CRI-O reads its configured registry defaults from the system-wide
	I1222 00:26:59.525244 1915516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1222 00:26:59.525248 1915516 command_runner.go:130] > [crio.image]
	I1222 00:26:59.525254 1915516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1222 00:26:59.525258 1915516 command_runner.go:130] > # default_transport = "docker://"
	I1222 00:26:59.525363 1915516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1222 00:26:59.525377 1915516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525381 1915516 command_runner.go:130] > # global_auth_file = ""
	I1222 00:26:59.525392 1915516 command_runner.go:130] > # The image used to instantiate infra containers.
	I1222 00:26:59.525398 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525403 1915516 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1222 00:26:59.525414 1915516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1222 00:26:59.525425 1915516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1222 00:26:59.525432 1915516 command_runner.go:130] > # This option supports live configuration reload.
	I1222 00:26:59.525436 1915516 command_runner.go:130] > # pause_image_auth_file = ""
	I1222 00:26:59.525442 1915516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1222 00:26:59.525450 1915516 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1222 00:26:59.525460 1915516 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1222 00:26:59.525466 1915516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1222 00:26:59.525474 1915516 command_runner.go:130] > # pause_command = "/pause"
	I1222 00:26:59.525480 1915516 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1222 00:26:59.525491 1915516 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1222 00:26:59.525499 1915516 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1222 00:26:59.525509 1915516 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1222 00:26:59.525515 1915516 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1222 00:26:59.525521 1915516 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1222 00:26:59.525527 1915516 command_runner.go:130] > # pinned_images = [
	I1222 00:26:59.525530 1915516 command_runner.go:130] > # ]
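	As a sketch of the three pattern kinds described above (the image names are made-up examples, not values from this run):

		pinned_images = [
			"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
			"registry.k8s.io/etcd*",         # glob: wildcard only at the end
			"*coredns*",                     # keyword: wildcards on both ends
		]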
	I1222 00:26:59.525536 1915516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1222 00:26:59.525545 1915516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1222 00:26:59.525557 1915516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1222 00:26:59.525563 1915516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1222 00:26:59.525572 1915516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1222 00:26:59.525578 1915516 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1222 00:26:59.525584 1915516 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1222 00:26:59.525594 1915516 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1222 00:26:59.525601 1915516 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1222 00:26:59.525608 1915516 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1222 00:26:59.525616 1915516 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1222 00:26:59.525621 1915516 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1222 00:26:59.525627 1915516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1222 00:26:59.525634 1915516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1222 00:26:59.525642 1915516 command_runner.go:130] > # changing them here.
	I1222 00:26:59.525648 1915516 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1222 00:26:59.525652 1915516 command_runner.go:130] > # insecure_registries = [
	I1222 00:26:59.525659 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525666 1915516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1222 00:26:59.525676 1915516 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1222 00:26:59.525684 1915516 command_runner.go:130] > # image_volumes = "mkdir"
	I1222 00:26:59.525692 1915516 command_runner.go:130] > # Temporary directory to use for storing big files
	I1222 00:26:59.525697 1915516 command_runner.go:130] > # big_files_temporary_dir = ""
	I1222 00:26:59.525703 1915516 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1222 00:26:59.525711 1915516 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1222 00:26:59.525717 1915516 command_runner.go:130] > # auto_reload_registries = false
	I1222 00:26:59.525724 1915516 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1222 00:26:59.525734 1915516 command_runner.go:130] > # gets canceled. This value is also used to derive the pull progress interval, pull_progress_timeout / 10.
	I1222 00:26:59.525744 1915516 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1222 00:26:59.525748 1915516 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1222 00:26:59.525770 1915516 command_runner.go:130] > # The mode of short name resolution.
	I1222 00:26:59.525780 1915516 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1222 00:26:59.525788 1915516 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1222 00:26:59.525794 1915516 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1222 00:26:59.525800 1915516 command_runner.go:130] > # short_name_mode = "enforcing"
	I1222 00:26:59.525807 1915516 command_runner.go:130] > # oci_artifact_mount_support controls whether CRI-O supports mounting OCI artifacts.
	I1222 00:26:59.525813 1915516 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1222 00:26:59.525821 1915516 command_runner.go:130] > # oci_artifact_mount_support = true
	I1222 00:26:59.525829 1915516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1222 00:26:59.525836 1915516 command_runner.go:130] > # CNI plugins.
	I1222 00:26:59.525843 1915516 command_runner.go:130] > [crio.network]
	I1222 00:26:59.525850 1915516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1222 00:26:59.525859 1915516 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1222 00:26:59.525863 1915516 command_runner.go:130] > # cni_default_network = ""
	I1222 00:26:59.525869 1915516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1222 00:26:59.525877 1915516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1222 00:26:59.525883 1915516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1222 00:26:59.525887 1915516 command_runner.go:130] > # plugin_dirs = [
	I1222 00:26:59.525891 1915516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1222 00:26:59.525897 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525903 1915516 command_runner.go:130] > # List of included pod metrics.
	I1222 00:26:59.525908 1915516 command_runner.go:130] > # included_pod_metrics = [
	I1222 00:26:59.525913 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.525920 1915516 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1222 00:26:59.525926 1915516 command_runner.go:130] > [crio.metrics]
	I1222 00:26:59.525932 1915516 command_runner.go:130] > # Globally enable or disable metrics support.
	I1222 00:26:59.525936 1915516 command_runner.go:130] > # enable_metrics = false
	I1222 00:26:59.525945 1915516 command_runner.go:130] > # Specify enabled metrics collectors.
	I1222 00:26:59.525950 1915516 command_runner.go:130] > # By default, all metrics are enabled.
	I1222 00:26:59.525957 1915516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1222 00:26:59.525969 1915516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1222 00:26:59.525976 1915516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1222 00:26:59.525981 1915516 command_runner.go:130] > # metrics_collectors = [
	I1222 00:26:59.525987 1915516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1222 00:26:59.525992 1915516 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1222 00:26:59.525998 1915516 command_runner.go:130] > # 	"containers_oom_total",
	I1222 00:26:59.526006 1915516 command_runner.go:130] > # 	"processes_defunct",
	I1222 00:26:59.526010 1915516 command_runner.go:130] > # 	"operations_total",
	I1222 00:26:59.526014 1915516 command_runner.go:130] > # 	"operations_latency_seconds",
	I1222 00:26:59.526019 1915516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1222 00:26:59.526023 1915516 command_runner.go:130] > # 	"operations_errors_total",
	I1222 00:26:59.526028 1915516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1222 00:26:59.526033 1915516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1222 00:26:59.526037 1915516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1222 00:26:59.526041 1915516 command_runner.go:130] > # 	"image_pulls_success_total",
	I1222 00:26:59.526045 1915516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1222 00:26:59.526050 1915516 command_runner.go:130] > # 	"containers_oom_count_total",
	I1222 00:26:59.526055 1915516 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1222 00:26:59.526059 1915516 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1222 00:26:59.526064 1915516 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1222 00:26:59.526066 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526080 1915516 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1222 00:26:59.526084 1915516 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1222 00:26:59.526089 1915516 command_runner.go:130] > # The port on which the metrics server will listen.
	I1222 00:26:59.526093 1915516 command_runner.go:130] > # metrics_port = 9090
	I1222 00:26:59.526098 1915516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1222 00:26:59.526102 1915516 command_runner.go:130] > # metrics_socket = ""
	I1222 00:26:59.526107 1915516 command_runner.go:130] > # The certificate for the secure metrics server.
	I1222 00:26:59.526118 1915516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1222 00:26:59.526125 1915516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1222 00:26:59.526130 1915516 command_runner.go:130] > # certificate on any modification event.
	I1222 00:26:59.526134 1915516 command_runner.go:130] > # metrics_cert = ""
	I1222 00:26:59.526139 1915516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1222 00:26:59.526148 1915516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1222 00:26:59.526151 1915516 command_runner.go:130] > # metrics_key = ""
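	A minimal sketch that turns the metrics server on with a reduced collector set; the chosen collectors and values are assumptions (any names from the list above would work), after which the endpoint would be scrapeable at http://127.0.0.1:9090/metrics:

		[crio.metrics]
		enable_metrics = true
		metrics_collectors = [
			"operations_total",
			"image_pulls_failure_total",
		]
		metrics_host = "127.0.0.1"
		metrics_port = 9090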
	I1222 00:26:59.526157 1915516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1222 00:26:59.526163 1915516 command_runner.go:130] > [crio.tracing]
	I1222 00:26:59.526169 1915516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1222 00:26:59.526177 1915516 command_runner.go:130] > # enable_tracing = false
	I1222 00:26:59.526182 1915516 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1222 00:26:59.526187 1915516 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1222 00:26:59.526197 1915516 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1222 00:26:59.526205 1915516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
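	A minimal sketch enabling tracing with every span sampled (the endpoint is the default shown above; enabling it is an assumption for illustration):

		[crio.tracing]
		enable_tracing = true
		tracing_endpoint = "127.0.0.1:4317"
		tracing_sampling_rate_per_million = 1000000  # 1000000 = always sample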
	I1222 00:26:59.526213 1915516 command_runner.go:130] > # CRI-O NRI configuration.
	I1222 00:26:59.526218 1915516 command_runner.go:130] > [crio.nri]
	I1222 00:26:59.526222 1915516 command_runner.go:130] > # Globally enable or disable NRI.
	I1222 00:26:59.526226 1915516 command_runner.go:130] > # enable_nri = true
	I1222 00:26:59.526230 1915516 command_runner.go:130] > # NRI socket to listen on.
	I1222 00:26:59.526238 1915516 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1222 00:26:59.526242 1915516 command_runner.go:130] > # NRI plugin directory to use.
	I1222 00:26:59.526247 1915516 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1222 00:26:59.526255 1915516 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1222 00:26:59.526264 1915516 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1222 00:26:59.526270 1915516 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1222 00:26:59.526331 1915516 command_runner.go:130] > # nri_disable_connections = false
	I1222 00:26:59.526342 1915516 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1222 00:26:59.526347 1915516 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1222 00:26:59.526352 1915516 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1222 00:26:59.526359 1915516 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1222 00:26:59.526364 1915516 command_runner.go:130] > # NRI default validator configuration.
	I1222 00:26:59.526374 1915516 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1222 00:26:59.526381 1915516 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently, the following adjustments
	I1222 00:26:59.526389 1915516 command_runner.go:130] > # can be restricted/rejected:
	I1222 00:26:59.526393 1915516 command_runner.go:130] > # - OCI hook injection
	I1222 00:26:59.526399 1915516 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1222 00:26:59.526406 1915516 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1222 00:26:59.526411 1915516 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1222 00:26:59.526416 1915516 command_runner.go:130] > # - adjustment of linux namespaces
	I1222 00:26:59.526425 1915516 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1222 00:26:59.526435 1915516 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1222 00:26:59.526442 1915516 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1222 00:26:59.526448 1915516 command_runner.go:130] > #
	I1222 00:26:59.526452 1915516 command_runner.go:130] > # [crio.nri.default_validator]
	I1222 00:26:59.526457 1915516 command_runner.go:130] > # nri_enable_default_validator = false
	I1222 00:26:59.526467 1915516 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1222 00:26:59.526474 1915516 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1222 00:26:59.526485 1915516 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1222 00:26:59.526492 1915516 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1222 00:26:59.526500 1915516 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1222 00:26:59.526505 1915516 command_runner.go:130] > # nri_validator_required_plugins = [
	I1222 00:26:59.526510 1915516 command_runner.go:130] > # ]
	I1222 00:26:59.526516 1915516 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
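	A minimal sketch of the default validator enabled with one rejection rule and one required plugin; the plugin name is a made-up placeholder, not a real plugin from this run:

		[crio.nri.default_validator]
		nri_enable_default_validator = true
		nri_validator_reject_oci_hook_adjustment = true
		nri_validator_required_plugins = [
			"resource-policy",  # hypothetical NRI plugin name
		]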
	I1222 00:26:59.526525 1915516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1222 00:26:59.526530 1915516 command_runner.go:130] > [crio.stats]
	I1222 00:26:59.526543 1915516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1222 00:26:59.526549 1915516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1222 00:26:59.526556 1915516 command_runner.go:130] > # stats_collection_period = 0
	I1222 00:26:59.526563 1915516 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1222 00:26:59.526573 1915516 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1222 00:26:59.526577 1915516 command_runner.go:130] > # collection_period = 0
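	A sketch switching both collectors from on-demand to periodic collection; the 10-second value is an arbitrary assumption:

		[crio.stats]
		stats_collection_period = 10  # seconds; 0 means on-demand
		collection_period = 10        # seconds; 0 means on-demand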
	I1222 00:26:59.526772 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.4923315Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1222 00:26:59.526793 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492368561Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1222 00:26:59.526804 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492394456Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1222 00:26:59.526811 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492418381Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1222 00:26:59.526829 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.49249673Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:26:59.526841 1915516 command_runner.go:130] ! time="2025-12-22T00:26:59.492972301Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1222 00:26:59.526852 1915516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1222 00:26:59.526929 1915516 cni.go:84] Creating CNI manager for ""
	I1222 00:26:59.526945 1915516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:26:59.526969 1915516 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:26:59.526995 1915516 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:26:59.527121 1915516 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:26:59.527200 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:26:59.535433 1915516 command_runner.go:130] > kubeadm
	I1222 00:26:59.535512 1915516 command_runner.go:130] > kubectl
	I1222 00:26:59.535531 1915516 command_runner.go:130] > kubelet
	I1222 00:26:59.535576 1915516 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:26:59.535650 1915516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:26:59.543039 1915516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:26:59.555537 1915516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:26:59.568459 1915516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 00:26:59.580757 1915516 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:26:59.584332 1915516 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:26:59.584456 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:26:59.724165 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:00.346235 1915516 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:27:00.346307 1915516 certs.go:195] generating shared ca certs ...
	I1222 00:27:00.346339 1915516 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:00.346533 1915516 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:27:00.346648 1915516 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:27:00.346710 1915516 certs.go:257] generating profile certs ...
	I1222 00:27:00.346876 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:27:00.347136 1915516 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:27:00.347262 1915516 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:27:00.347310 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:27:00.347347 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:27:00.347391 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:27:00.347455 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:27:00.347514 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:27:00.347563 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:27:00.347599 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:27:00.347655 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:27:00.347758 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:27:00.347824 1915516 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:27:00.347874 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:27:00.347933 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:27:00.347992 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:27:00.348063 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:27:00.348147 1915516 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:27:00.348210 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem -> /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.348254 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.348294 1915516 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.349116 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:27:00.373997 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:27:00.395587 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:27:00.415350 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:27:00.433576 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:27:00.451786 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:27:00.469999 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:27:00.488347 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:27:00.506732 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:27:00.524546 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:27:00.542404 1915516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:27:00.559977 1915516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:27:00.572924 1915516 ssh_runner.go:195] Run: openssl version
	I1222 00:27:00.579133 1915516 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:27:00.579568 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.587905 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:27:00.596398 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600098 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600361 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.600432 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:27:00.641247 1915516 command_runner.go:130] > 51391683
	I1222 00:27:00.641737 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:27:00.649412 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.657021 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:27:00.664905 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668819 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668897 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.668950 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:27:00.710718 1915516 command_runner.go:130] > 3ec20f2e
	I1222 00:27:00.711212 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:27:00.718755 1915516 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.725802 1915516 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:27:00.733002 1915516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736446 1915516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736472 1915516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.736534 1915516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:27:00.776933 1915516 command_runner.go:130] > b5213941
	I1222 00:27:00.777458 1915516 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:27:00.784943 1915516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788657 1915516 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:27:00.788682 1915516 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:27:00.788688 1915516 command_runner.go:130] > Device: 259,1	Inode: 1585688     Links: 1
	I1222 00:27:00.788695 1915516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:27:00.788729 1915516 command_runner.go:130] > Access: 2025-12-22 00:22:53.052243967 +0000
	I1222 00:27:00.788738 1915516 command_runner.go:130] > Modify: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788744 1915516 command_runner.go:130] > Change: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788752 1915516 command_runner.go:130] >  Birth: 2025-12-22 00:18:48.995482919 +0000
	I1222 00:27:00.788843 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:27:00.834300 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.834770 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:27:00.875032 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.875494 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:27:00.916072 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.916544 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:27:00.957168 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:00.957608 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:27:01.005250 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:01.005855 1915516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:27:01.046783 1915516 command_runner.go:130] > Certificate will not expire
	I1222 00:27:01.047595 1915516 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:27:01.047694 1915516 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:27:01.047794 1915516 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:27:01.074422 1915516 cri.go:96] found id: ""
	I1222 00:27:01.074566 1915516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:27:01.081706 1915516 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:27:01.081729 1915516 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:27:01.081736 1915516 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:27:01.082711 1915516 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:27:01.082752 1915516 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:27:01.082809 1915516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:27:01.090505 1915516 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:27:01.090899 1915516 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-837733" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091003 1915516 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "functional-837733" cluster setting kubeconfig missing "functional-837733" context setting]
	I1222 00:27:01.091279 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.091694 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.091853 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.092420 1915516 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:27:01.092441 1915516 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:27:01.092469 1915516 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:27:01.092508 1915516 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:27:01.092519 1915516 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:27:01.092524 1915516 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:27:01.092821 1915516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:27:01.103318 1915516 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
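The `sudo diff -u` probe above is how the restart path decides whether the freshly generated kubeadm.yaml differs from the one on disk; exit status 0 means identical (hence "does not require reconfiguration"). A sketch of that decision, relying only on standard diff exit-code behavior:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err := cmd.Run(); err == nil {
			fmt.Println("The running cluster does not require reconfiguration")
		} else {
			// diff exits 1 when the files differ, >1 on trouble;
			// either way the restart path would re-run kubeadm here.
			fmt.Println("kubeadm.yaml changed, reconfiguring:", err)
		}
	}
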
	I1222 00:27:01.103353 1915516 kubeadm.go:602] duration metric: took 20.58853ms to restartPrimaryControlPlane
	I1222 00:27:01.103366 1915516 kubeadm.go:403] duration metric: took 55.780111ms to StartCluster
	I1222 00:27:01.103381 1915516 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.103449 1915516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.104102 1915516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:27:01.104313 1915516 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 00:27:01.104706 1915516 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:27:01.104752 1915516 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
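The toEnable map above is simply addon name to desired state, and the enable loop only acts on the true entries (storage-provisioner and default-storageclass in this run). A toy reduction of that step:

	package main

	import "fmt"

	func main() {
		toEnable := map[string]bool{
			"storage-provisioner":  true,
			"default-storageclass": true,
			"registry":             false, // ...plus the ~35 other false entries in the real map
		}
		for name, want := range toEnable {
			if want {
				fmt.Printf("Setting addon %s=true in profile %q\n", name, "functional-837733")
			}
		}
	}
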
	I1222 00:27:01.104888 1915516 addons.go:70] Setting storage-provisioner=true in profile "functional-837733"
	I1222 00:27:01.104908 1915516 addons.go:239] Setting addon storage-provisioner=true in "functional-837733"
	I1222 00:27:01.104931 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.105709 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.105852 1915516 addons.go:70] Setting default-storageclass=true in profile "functional-837733"
	I1222 00:27:01.105876 1915516 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-837733"
	I1222 00:27:01.106124 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.111206 1915516 out.go:179] * Verifying Kubernetes components...
	I1222 00:27:01.114015 1915516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:27:01.149988 1915516 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:27:01.150406 1915516 kapi.go:59] client config for functional-837733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:27:01.150740 1915516 addons.go:239] Setting addon default-storageclass=true in "functional-837733"
	I1222 00:27:01.150781 1915516 host.go:66] Checking if "functional-837733" exists ...
	I1222 00:27:01.151633 1915516 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:27:01.160017 1915516 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:27:01.163813 1915516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.163840 1915516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:27:01.163933 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.208933 1915516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:01.208958 1915516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:27:01.209032 1915516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:27:01.220299 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:27:01.259895 1915516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
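The sshutil lines above carry everything needed to reach the node (127.0.0.1:38471 with the profile's id_rsa, user "docker"), and the earlier `scp memory -->` lines stream an in-memory manifest to a root-owned path. A combined sketch of both steps with golang.org/x/crypto/ssh; the `sudo tee` transfer is an assumption on my part (the real protocol lives in ssh_runner.go), and host-key checking is disabled only because this is a throwaway local VM:

	package main

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Dial parameters copied from the sshutil log lines above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:38471", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local test VM only
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		// "scp memory --> path": pipe in-memory bytes through `sudo tee` so the
		// file lands root-owned without a writable scp target.
		manifest := []byte("# storage-provisioner.yaml contents here\n")
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(manifest)
		if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
			panic(err)
		}
	}
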
	I1222 00:27:01.324483 1915516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:27:01.352138 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:01.392769 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.095697 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.095745 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.095787 1915516 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
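The retry.go lines here (100ms) and the 7.2s one later in the log show the delay growing between failed applies while the apiserver refuses connections. A minimal capped-backoff loop in that pattern; the doubling schedule and cap are mine, not necessarily minikube's actual policy:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func main() {
		backoff := 100 * time.Millisecond
		const maxBackoff = 10 * time.Second

		apply := func() error {
			// stand-in for the kubectl apply that keeps failing while the
			// apiserver is refusing connections
			return errors.New("connection refused")
		}

		for attempt := 1; attempt <= 5; attempt++ {
			if err := apply(); err == nil {
				fmt.Println("applied")
				return
			} else {
				fmt.Printf("will retry after %v: %v\n", backoff, err)
				time.Sleep(backoff)
			}
			if backoff *= 2; backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}
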
	I1222 00:27:02.095683 1915516 node_ready.go:35] waiting up to 6m0s for node "functional-837733" to be "Ready" ...
	I1222 00:27:02.095893 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.095958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
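The node_ready wait that produces the long run of GET/Response pairs below is a poll of the node's Ready condition at roughly 500ms intervals, tolerating connection-refused errors until the 6m budget runs out. A client-go sketch of that loop; wait.PollUntilContextTimeout is the stock helper and an assumption about what minikube calls, and the TLS fields are omitted for brevity (reuse the cert paths from the kapi.go dump above):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // TLS fields omitted for brevity
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6m, mirroring the cadence visible in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := clientset.CoreV1().Nodes().Get(ctx, "functional-837733", metav1.GetOptions{})
				if err != nil {
					return false, nil // connection refused etc.: keep retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}
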
	I1222 00:27:02.096162 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.096189 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.096261 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.229628 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.250171 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.299403 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.303146 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.325073 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.325129 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.596516 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:02.596602 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.596960 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.700363 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:02.757664 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.757704 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:02.778883 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:02.845712 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:02.845792 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.096069 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.096495 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.596210 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:03.596328 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.596586 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.599774 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:03.608220 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:03.704291 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.704413 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:03.724087 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:03.724140 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.096645 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.096746 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.097060 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.097110 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:04.395547 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:04.461482 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.461526 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:04.596939 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:04.597016 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.597354 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.908718 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:04.967177 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:04.967217 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:05.096456 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.096540 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.096895 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.596626 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:05.596711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.096915 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.096990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:06.097365 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:06.122623 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:06.180609 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.181800 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 00:27:06.185073 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.244148 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:06.244195 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:06.596862 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:06.596990 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.597266 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.095976 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.096054 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.096380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:07.596061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.596391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.650649 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:07.682081 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:07.723708 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.723755 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:07.756657 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:07.756702 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:08.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.096407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.596065 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:08.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.596459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:08.596512 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:09.096039 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.096116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.500982 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:09.557414 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:09.557460 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:09.596618 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:09.596687 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.596949 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.096786 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.096870 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.407596 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:10.466808 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:10.466847 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:10.596231 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:10.596316 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:10.596666 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:11.096583 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.096658 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.596951 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:11.597033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.597361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.095930 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.096006 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.096359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:12.596103 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.596346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.096064 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.096469 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:13.110706 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:13.162639 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:13.165994 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:13.596604 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:13.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.597025 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.096902 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.097221 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:14.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.596331 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.007048 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:15.075847 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:15.075897 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:15.095981 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.096063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.596111 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:15.596199 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.596504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.096151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.096481 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.596451 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:16.596527 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.596873 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.096633 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.096711 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.097034 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.256498 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:17.320974 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:17.324801 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:17.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:17.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.596544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:17.596599 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.096285 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.096359 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.096666 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.596351 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:18.596417 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.596709 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.596186 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:19.596288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.596608 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:19.596658 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:20.095993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:20.596149 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.596444 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.096326 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.096409 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.096724 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.596667 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:21.596736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.597043 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:21.597093 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:22.096478 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.096951 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.596629 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:22.596721 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.597082 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.096795 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.096876 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.097130 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.596917 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:23.596995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.597294 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.941866 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:23.998448 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:23.998497 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:23.998521 1915516 retry.go:84] will retry after 7.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:24.096775 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.096858 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.097179 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.596921 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:24.596997 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.597304 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.096528 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.596241 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:25.596313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.596669 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.096357 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.096423 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.096702 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:26.096745 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.596861 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:26.596935 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.597278 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.596062 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:27.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.596394 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.096047 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.096121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.096413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.596136 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:28.596210 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.596520 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:28.596577 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:29.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.096085 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.096330 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:29.596121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.096205 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.096297 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.096674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.321067 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:30.379433 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:30.379480 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:30.379505 1915516 retry.go:84] will retry after 9.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
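
Every failure in this stretch is the same symptom: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver and gets "connection refused", meaning nothing is listening on port 8441 yet, so --validate=false would not actually help. A quick reachability probe of that port, assuming localhost:8441 as in the log; this is a hypothetical diagnostic, not part of the test harness:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "dial tcp [::1]:8441: connect: connection refused" in the log is
		// exactly what this returns while kube-apiserver is down.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
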
	I1222 00:27:30.596804 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:30.596878 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.597200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.597254 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:31.096701 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.096780 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.097109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.231597 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:31.293911 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:31.293951 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:31.596615 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:31.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.096660 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:32.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.097042 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.596886 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:32.596967 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.597283 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.597352 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:33.096960 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:33.097032 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.097391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.596091 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:33.596164 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:34.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.596230 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:34.596325 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.596664 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.096343 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:35.096412 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.096685 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:35.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.596025 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:35.596096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.096033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:36.096117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.596412 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:36.596479 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.596727 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.096555 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:37.096628 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.096973 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:37.097038 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.596788 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:37.596867 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.597219 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.095946 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:38.096029 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.096335 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:38.596129 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.596466 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.096201 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:39.096281 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.096646 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:39.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.596380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.596430 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:40.096081 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.096154 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.098739 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:40.155643 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:40.159357 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.159395 1915516 retry.go:84] will retry after 15s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:40.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:40.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.096329 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:41.096400 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.096688 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.596638 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:41.596757 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.597079 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.597130 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:42.096986 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:42.097069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.101091 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1222 00:27:42.596907 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:42.596992 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.597277 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:43.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.096458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:43.596165 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.096105 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:44.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.096430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:44.096472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:44.596127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.096194 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:45.096288 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.596011 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:45.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.596379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.096150 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:46.096534 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.458265 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:27:46.518675 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:46.518729 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.518755 1915516 retry.go:84] will retry after 28.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:46.596931 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:46.597020 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.597346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:47.096075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.096364 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.596079 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:47.596151 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.596467 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.096201 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:48.096276 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.096625 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.096740 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.595982 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:48.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.596414 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:49.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.096420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.596056 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:49.596137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.596454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.096214 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:50.096306 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.596061 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:50.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.596464 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.596521 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:51.096372 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:51.096448 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.096776 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.596622 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:51.596716 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.596979 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.096857 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:52.096927 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.097248 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:52.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:53.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.096378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:53.096448 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.596094 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:53.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.596524 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.096119 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:54.096195 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.096565 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.596116 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:54.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.096066 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:55.096178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:55.096580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.200777 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:27:55.265479 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:27:55.265520 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:55.265546 1915516 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:27:55.595965 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:55.596049 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.596359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.095950 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:56.096027 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.096297 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.596381 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:56.596461 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.596751 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.096528 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:57.096601 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.096888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.096932 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.596637 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:57.596710 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.596964 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.096723 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:58.096792 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.097169 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.596957 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:58.597032 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.597316 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.096007 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:59.096099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.096417 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:27:59.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.596420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.596476 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:00.096143 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:00.096238 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.596214 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:00.596306 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.596559 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.096632 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:01.096736 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.097205 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.596188 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:01.596261 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.596603 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.596670 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.096310 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:02.096394 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.096715 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.596419 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:02.596515 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.596851 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.096613 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:03.096689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.097037 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.596789 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:03.596864 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.597113 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.597153 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.096886 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:04.096958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.097257 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.595963 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:04.596038 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.596345 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.096027 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:05.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.596020 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:05.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.596425 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.096150 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:06.096257 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.096620 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:06.096682 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.596417 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:06.596491 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.596755 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:07.096124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.096458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.596180 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:07.596270 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.596617 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:08.096094 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.096386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.596023 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:08.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.596419 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:08.596475 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:09.096045 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:09.096121 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.096443 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.596165 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:09.596234 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.596498 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.096188 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:10.096269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.096566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.596052 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:10.596132 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.596422 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.096409 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:11.096501 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.096849 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:11.096946 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	... (the ~500ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-837733 repeated unchanged from 00:28:11.596 through 00:28:14.596, every dial refused; a node_ready retry warning was logged at 00:28:13.596) ...
	I1222 00:28:14.678913 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:14.737274 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:14.740868 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:28:14.740903 1915516 retry.go:84] will retry after 44.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
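
The "will retry after 44.3s" line comes from minikube's generic retry helper, which reruns the failed kubectl apply after a growing delay. A hedged sketch of that retry-with-backoff pattern (the helper name, bounds, and jitter scheme are assumptions; only the doubling-delay idea is taken from the log):

```go
// Package retrysketch mimics the backoff behaviour behind "will retry after 44.3s".
package retrysketch

import (
	"math/rand"
	"time"
)

// Backoff runs fn up to attempts times, sleeping a jittered, doubling delay
// between failures. initial and max are illustrative parameters; the real
// retry.go computes its own schedule.
func Backoff(fn func() error, initial, max time.Duration, attempts int) error {
	var err error
	delay := initial
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Sleep between half and the full current delay so concurrent
		// retriers do not fire in lockstep.
		time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay/2)+1)))
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}
```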
	... (identical ~500ms polls of /api/v1/nodes/functional-837733 continued from 00:28:15.096 through 00:28:34.096, still refused; node_ready retry warnings recurred roughly every two seconds, at 00:28:16.096, 18.097, 20.596, 22.596, 24.596, 26.596, 29.096, 31.096, and 33.596) ...
	I1222 00:28:34.454097 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:28:34.514142 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514187 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:34.514269 1915516 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
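
Both addon failures above share one root cause: kubectl apply validates manifests client-side by downloading the OpenAPI schema from the apiserver, so while the apiserver refuses connections even a well-formed storageclass.yaml cannot be applied. The --validate=false escape hatch named in the error only skips that schema download; the apply itself would still need a reachable apiserver, which is why minikube retries instead of disabling validation. A sketch of the logged command as a Go wrapper (function and parameter names are hypothetical):

```go
// Package addonsketch shells out to kubectl the way the addon callbacks above do.
package addonsketch

import (
	"fmt"
	"os/exec"
)

// Apply mirrors the logged command:
//   sudo KUBECONFIG=<kubeconfig> <kubectl> apply --force -f <manifest>
// The skipValidation branch is the --validate=false fallback the error
// message suggests; it only skips the OpenAPI schema download.
func Apply(kubectl, kubeconfig, manifest string, skipValidation bool) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply", "--force", "-f", manifest}
	if skipValidation {
		args = append(args, "--validate=false")
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
	}
	return nil
}
```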
	... (identical ~500ms polls of /api/v1/nodes/functional-837733 continued from 00:28:34.596 through 00:28:58.596, still refused; node_ready retry warnings at 00:28:35.597, 38.096, 40.096, 42.097, 44.596, 46.597, 49.096, 51.096, 53.596, 56.096, and 58.097) ...
	I1222 00:28:59.061108 1915516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:28:59.096537 1915516 type.go:165] "Request Body" body=""
	I1222 00:28:59.096605 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:59.096887 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:59.132532 1915516 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138577 1915516 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:28:59.138673 1915516 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:28:59.141746 1915516 out.go:179] * Enabled addons: 
	I1222 00:28:59.145405 1915516 addons.go:530] duration metric: took 1m58.040642899s for enable addons: enabled=[]
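
After roughly two minutes of failed callbacks the addon manager gives up and reports an empty enabled list rather than aborting the start, which is what "Enabled addons:" followed by enabled=[] records above. A small sketch of that collect-and-summarise flow (names are illustrative, not minikube's addons.go):

```go
// Package addonsummary sketches the addon-enable summary printed above.
package addonsummary

import (
	"fmt"
	"time"
)

// Enable runs each addon callback, keeps going past failures, and prints a
// final summary; when every callback fails, enabled stays empty, matching
// the log's "enabled=[]".
func Enable(callbacks map[string]func() error) []string {
	start := time.Now()
	var enabled []string
	for name, cb := range callbacks {
		if err := cb(); err != nil {
			fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	fmt.Printf("* Enabled addons: %v\n", enabled)
	fmt.Printf("duration metric: took %s for enable addons\n", time.Since(start))
	return enabled
}
```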
	... (identical ~500ms polls of /api/v1/nodes/functional-837733 continued from 00:28:59.596 through 00:29:07.597, still refused; node_ready retry warnings at 00:29:00.596, 02.597, 05.096, and 07.097) ...
	I1222 00:29:08.096888 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:08.096974 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:08.097302 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:08.595949 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:08.596080 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:08.596411 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:09.095985 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:09.096064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:09.096372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:09.595950 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:09.596027 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:09.596300 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:09.596339 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:10.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:10.096125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:10.096502 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:10.596205 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:10.596303 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:10.596585 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:11.096568 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:11.096641 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:11.096925 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:11.596903 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:11.596980 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:11.597315 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:11.597389 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:12.096084 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:12.096163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:12.096485 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:12.595972 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:12.596042 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:12.596348 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:13.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:13.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:13.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:13.596162 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:13.596244 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:13.596528 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:14.096197 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:14.096273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:14.096544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:14.096594 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:14.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:14.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:14.596443 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:15.096144 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:15.096223 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:15.096585 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:15.596256 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:15.596324 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:15.596577 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:16.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:16.096114 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:16.096452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:16.596455 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:16.596531 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:16.596865 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:16.596917 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:17.096480 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:17.096729 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:17.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:17.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:17.596427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:18.096144 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:18.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:18.096579 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:18.596236 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:18.596315 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:18.596584 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:19.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:19.096100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:19.096451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:19.096504 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:19.596189 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:19.596268 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:19.596595 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:20.096286 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:20.096362 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:20.096641 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:20.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:20.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:20.596480 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:21.096400 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:21.096480 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:21.096769 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:21.096815 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:21.596733 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:21.596825 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:21.597125 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:22.096830 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:22.096913 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:22.097244 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:22.596062 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:22.596138 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:22.596522 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:23.095999 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:23.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:23.096379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:23.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:23.596102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:23.596429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:23.596485 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:24.096004 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:24.096081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:24.096427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:24.596151 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:24.596232 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:24.596506 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:25.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:25.096127 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:25.096466 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:25.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:25.596128 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:25.596494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:25.596557 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:26.096037 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:26.096119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:26.096391 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:26.596314 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:26.596400 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:26.596729 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:27.096554 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:27.096633 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:27.097006 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:27.596775 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:27.596844 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:27.597098 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:27.597137 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:28.096938 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:28.097014 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:28.097401 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:28.595911 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:28.595983 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:28.596321 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:29.095997 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:29.096068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:29.096361 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:29.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:29.596111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:29.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:30.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:30.096126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:30.096489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:30.096547 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:30.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:30.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:30.596327 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:31.096344 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:31.096420 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:31.096776 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:31.596746 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:31.596817 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:31.597107 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:32.096894 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:32.096967 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:32.097237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:32.097281 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:32.596869 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:32.596969 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:32.597275 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:33.096017 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:33.096111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:33.096482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:33.595994 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:33.596068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:33.596315 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:34.096025 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:34.096102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:34.096423 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:34.596130 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:34.596210 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:34.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:34.596568 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:35.096117 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:35.096195 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:35.096478 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:35.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:35.596074 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:35.596406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:36.096049 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:36.096130 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:36.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:36.596449 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:36.596524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:36.596791 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:36.596841 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:37.096628 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:37.096714 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:37.097074 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:37.596870 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:37.596949 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:37.597264 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:38.096987 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:38.097079 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:38.097412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:38.596121 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:38.596255 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:38.596606 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:39.096335 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:39.096415 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:39.096735 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:39.096791 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:39.596011 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:39.596081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:39.596386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:40.096071 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:40.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:40.096512 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:40.596055 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:40.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:40.596469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:41.096427 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:41.096493 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:41.096765 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:41.596776 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:41.596860 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:41.597207 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:41.597262 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:42.097196 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:42.097290 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:42.097753 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:42.596494 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:42.596559 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:42.596818 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:43.096412 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:43.096485 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:43.096813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:43.596584 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:43.596660 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:43.596988 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:44.096629 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:44.096696 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:44.096962 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:44.097007 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:44.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:44.596896 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:44.597195 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:45.097028 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:45.097117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:45.097583 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:45.596273 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:45.596360 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:45.596616 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.096043 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.096122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:46.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:46.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:46.596472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:46.596528 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:47.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.096065 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.096384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:47.596069 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:47.596146 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:47.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.096143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.096477 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:48.596040 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:48.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:48.596372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:49.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:49.096506 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:49.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:49.596112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:49.596446 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.096191 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:50.596046 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:50.596117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:50.596450 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:51.096262 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.096340 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.096671 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:51.096730 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:51.596674 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:51.596745 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:51.597007 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.096802 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.096883 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.097238 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:52.595995 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:52.596076 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:52.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.096179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:53.596015 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:53.596097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:53.596419 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:53.596472 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:54.096190 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.096273 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.096617 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:54.595974 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:54.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:54.596313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.096029 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.096097 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.096461 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:55.596182 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:55.596258 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:55.596563 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:55.596609 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:56.096131 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.096478 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:56.596450 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:56.596522 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:56.596813 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.096065 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.096156 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.096459 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:57.596016 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:57.596086 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:57.596343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:58.096113 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.096201 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.096499 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:29:58.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:29:58.596034 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:58.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:58.596448 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.096348 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:29:59.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:29:59.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:29:59.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:00.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.096200 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.096548 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:00.096612 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:00.596019 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:00.596104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:00.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.096631 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.096833 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:01.596402 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:01.596486 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:01.596923 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:02.096741 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.096817 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.097099 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:02.097145 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:02.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:02.596554 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:02.596891 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.096678 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.096749 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.097049 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:03.596811 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:03.596893 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:03.597160 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:04.096926 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.097001 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.097385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:04.097441 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:04.595979 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:04.596053 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:04.596406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:05.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:05.596102 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:05.596436 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.096060 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.096136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:06.596009 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:06.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:06.596350 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:06.596396 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:07.096031 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.096454 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:07.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:07.596115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:07.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.096165 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.096236 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.096526 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:08.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:08.596124 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:08.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:08.596499 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:09.096002 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.096074 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.096386 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:09.596090 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:09.596181 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:09.596516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.096073 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.096161 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:10.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:10.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:10.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:11.096517 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.096604 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.097214 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:11.097278 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:11.596969 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:11.597041 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:11.597372 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.096147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:12.596282 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:12.596358 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:12.596621 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.096032 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.096105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.096445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:13.596024 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:13.596105 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:13.596479 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:13.596523 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:14.096010 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.096415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:14.596099 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:14.596185 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:14.596519 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.096074 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.096501 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:15.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:15.596093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:15.596363 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:16.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.096171 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.096570 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:16.096639 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:16.596380 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:16.596453 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:16.596745 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.096410 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.096475 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.096802 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:17.596630 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:17.596731 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:17.597138 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:18.096998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.097081 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.097420 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:18.097478 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:18.596096 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:18.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:18.596413 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.096231 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.096545 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:19.596039 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:19.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:19.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.096076 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.096152 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.096434 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:20.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:20.596089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:20.596441 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:20.596507 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:21.096445 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.096524 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.096840 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:21.596761 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:21.596832 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:21.597105 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.096883 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.096964 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.097271 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:22.596982 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:22.597056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:22.597440 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:22.597498 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:23.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.096337 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:23.596035 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:23.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:23.596458 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.096040 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.096473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:24.596157 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:24.596226 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:24.596496 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:25.096051 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.096490 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:25.096549 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:25.596026 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:25.596100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:25.596424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.096120 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.096204 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.096534 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:26.596472 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:26.596546 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:26.596888 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:27.096591 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.096677 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.097053 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:27.097108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:27.596848 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:27.596955 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:27.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.096089 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.096457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:28.596037 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:28.596136 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:28.596475 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.096142 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.096209 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.096456 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:29.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:29.596123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:29.596473 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:29.596527 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:30.096116 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.096205 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:30.596200 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:30.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:30.596525 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.096447 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.096541 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:31.596722 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:31.596793 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:31.597128 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:31.597181 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:32.096921 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.096995 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:32.596000 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:32.596073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:32.596407 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.096012 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.096087 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.096402 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:33.596107 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:33.596177 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:33.596451 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:34.096133 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.096208 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.096544 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:34.096604 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:34.596306 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:34.596398 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:34.596763 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.096549 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.096621 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.096956 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:35.596693 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:35.596770 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:35.597109 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:36.096908 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.096988 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.097351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:36.097407 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:36.596345 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:36.596419 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:36.596680 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.096025 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.096427 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:37.596033 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:37.596109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:37.596445 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.096125 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.096197 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:38.596147 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:38.596222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:38.596567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:38.596620 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:39.096295 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.096370 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:39.595993 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:39.596068 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:39.596381 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.096092 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.096192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.096557 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:40.596269 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:40.596348 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:40.596667 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:40.596717 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:41.096474 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.096552 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.096825 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:41.596734 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:41.596808 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:41.597164 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.096930 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.097022 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.097406 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:42.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:42.596116 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:42.596367 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:43.096013 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.096094 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.096428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:43.096482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:43.596115 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:43.596196 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:43.596500 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:44.095990 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:44.096056 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:44.096329 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:44.595987 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:44.596060 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:44.596378 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:45.096089 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:45.096182 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:45.096632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:45.096698 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:45.596155 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:45.596221 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:45.596471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:46.096004 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:46.096083 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:46.096468 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:46.596017 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:46.596090 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:46.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:47.096110 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:47.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:47.096527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:47.596194 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:47.596265 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:47.596618 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:47.596676 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:48.096061 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:48.096139 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:48.096488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:48.595998 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:48.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:48.596328 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:49.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:49.096096 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:49.096465 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:49.596175 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:49.596269 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:49.596561 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:50.095991 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:50.096061 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:50.096371 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:50.096418 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:50.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:50.596120 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:50.596449 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:51.096398 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:51.096473 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:51.096792 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:51.596692 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:51.596763 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:51.597029 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:30:52.096854 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:52.096954 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:52.097313 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:30:52.097385 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:30:52.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:30:52.596133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:30:52.596470 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-837733 request/response cycle shown above repeated every ~500ms from 00:30:53 through 00:31:54, each returning no response (status="" milliseconds=0) because of dial tcp 192.168.49.2:8441: connect: connection refused; node_ready.go:55 "will retry" warnings were logged every 2-3 seconds throughout this interval]
	I1222 00:31:54.596158 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:54.596225 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:54.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:55.096042 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:55.096117 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:55.596055 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:55.596132 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:55.596488 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:56.096195 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:56.096268 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:56.096566 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:56.096624 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:56.596612 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:56.596683 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:56.597015 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:57.096792 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:57.096867 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:57.097193 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:57.595906 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:57.595972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:57.596218 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:58.097085 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:58.097172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:58.097567 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:31:58.097619 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:31:58.596165 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:58.596241 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:58.596588 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:59.096257 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:59.096325 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:59.096568 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:31:59.596240 1915516 type.go:165] "Request Body" body=""
	I1222 00:31:59.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:31:59.596636 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:00.096136 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:00.096222 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:00.096550 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:00.596394 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:00.596465 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:00.596730 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:00.596772 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:01.096774 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:01.096885 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:01.097366 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:01.595973 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:01.596048 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:01.597013 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.096796 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.096863 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.097153 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:02.596892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:02.596965 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:02.597355 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:02.597411 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:03.095937 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.096024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.096410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:03.596093 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:03.596163 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:03.596410 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.096104 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.096429 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:04.596053 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:04.596135 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:04.596474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:05.096026 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.096107 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.096424 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:05.096479 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:05.596083 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:05.596157 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:05.596482 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.096062 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.096137 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.096505 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:06.596315 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:06.596381 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:06.596649 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:07.096321 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.096392 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.096733 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:07.096788 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:07.596571 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:07.596648 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:07.596948 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.096706 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.096778 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.097061 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:08.596819 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:08.596900 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:08.597237 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.095966 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.096044 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.096416 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:09.596095 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:09.596162 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:09.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:09.596474 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:10.096007 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.096082 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.096404 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:10.596067 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:10.596141 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:10.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.096744 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.096823 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.097172 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:11.595924 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:11.596004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:11.596347 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:12.096070 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.096155 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.096487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:12.096542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:12.595989 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:12.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:12.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.096101 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.096172 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.096516 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:13.596092 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:13.596174 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:13.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.096106 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.096437 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:14.596049 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:14.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:14.596489 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:14.596542 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:15.096232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.096313 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.096651 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:15.596333 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:15.596407 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:15.596674 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.096354 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.096430 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.096766 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:16.596753 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:16.596828 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:16.597170 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:16.597226 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:17.096933 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.097004 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.097253 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:17.595951 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:17.596024 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:17.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.095977 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.096058 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.096393 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:18.596001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:18.596069 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:18.596412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:19.096024 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.096098 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.096412 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:19.096467 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:19.596050 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:19.596126 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:19.596462 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.096172 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.096518 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:20.596013 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:20.596088 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:20.596415 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:21.096789 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.096865 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.097200 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:21.097256 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:21.596077 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:21.596143 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:21.596421 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:22.096046 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:22.096123 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:22.096469 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:22.596043 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:22.596125 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:22.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:23.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:23.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:23.096346 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:23.595990 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:23.596063 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:23.596380 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:23.596435 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:24.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:24.096108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:24.096472 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:24.596160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:24.596230 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:24.596484 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:25.096036 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:25.096112 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:25.096452 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:25.596159 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:25.596240 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:25.596576 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:25.596629 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:26.096303 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:26.096374 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:26.096637 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:26.596570 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:26.596649 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:26.596995 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:27.096624 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:27.096704 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:27.097042 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:27.596727 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:27.596794 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:27.597086 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:27.597135 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:28.096901 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:28.096979 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:28.097317 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:28.596045 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:28.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:28.596463 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:29.096139 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:29.096211 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:29.096453 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:29.596030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:29.596108 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:29.596403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:30.096097 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:30.096176 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:30.096532 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:30.096630 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:30.596085 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:30.596179 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:30.596498 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:31.096503 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:31.096583 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:31.096898 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:31.596830 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:31.596908 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:31.597245 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:32.095958 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.096033 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.096334 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:32.596022 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:32.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:32.596428 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:32.596482 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:33.096166 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.096574 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:33.596105 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:33.596178 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:33.596447 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.096079 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.096158 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.096492 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:34.596212 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:34.596289 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:34.596632 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:34.596687 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:35.096345 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.096429 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.096712 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:35.596048 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:35.596142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:35.596460 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.096160 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.096237 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.096579 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:36.596663 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:36.596739 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:36.597059 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:36.597108 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:37.096872 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.096958 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.097285 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:37.596038 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:37.596110 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:37.596433 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.096041 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.096109 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.096379 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:38.596032 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:38.596113 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:38.596457 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:39.096175 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.096249 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:39.096631 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:39.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:39.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:39.596384 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.096140 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.096474 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:40.596047 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:40.596119 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:40.596442 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:41.096370 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.096433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.096710 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:41.096759 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:41.596613 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:41.596690 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:41.596999 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.096724 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.096822 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.097289 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:42.596970 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:42.597040 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:42.597359 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.096020 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.096093 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.096426 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:43.595988 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:43.596067 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:43.596385 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:43.596439 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:44.096016 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.096084 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.096351 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:44.596042 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:44.596122 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:44.596476 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.096209 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.096309 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.096703 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:45.595999 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:45.596064 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:45.596318 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:46.096056 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.096142 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.096535 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:46.096587 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:46.596358 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:46.596433 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:46.596764 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.096505 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.096569 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.096863 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:47.596617 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:47.596689 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:47.597077 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:48.096892 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.096972 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.097296 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:48.097376 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:48.596003 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:48.596075 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:48.596383 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.096001 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.096073 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.096409 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:49.596021 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:49.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:49.596390 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.096018 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.096100 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.096418 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:50.595985 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:50.596057 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:50.596374 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:50.596428 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:51.096388 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.096466 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.096827 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:51.596847 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:51.596923 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:51.597181 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.095961 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.096037 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.096403 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:52.596120 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:52.596192 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:52.596527 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:52.596580 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:53.096225 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.096307 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.096575 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:53.596232 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:53.596314 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:53.596643 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.096242 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.096330 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.096701 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:54.596365 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:54.596434 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:54.596698 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:54.596749 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:55.096058 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.096133 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.096494 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:55.596071 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:55.596147 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:55.596487 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.096173 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.096243 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.096513 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:56.596488 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:56.596560 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:56.596914 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:56.596966 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:57.096719 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.096795 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.097343 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:57.595997 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:57.596070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:57.596325 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.096030 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.096115 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.096471 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:58.596028 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:58.596101 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:58.596425 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:32:59.096000 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.096070 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.096321 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:32:59.096369 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:32:59.596006 1915516 type.go:165] "Request Body" body=""
	I1222 00:32:59.596099 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:32:59.596430 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.096151 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.096235 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.096597 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:00.596299 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:00.596383 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:00.596831 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:01.096422 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.096517 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.096896 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:33:01.096952 1915516 node_ready.go:55] error getting node "functional-837733" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-837733": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:33:01.596985 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:01.597111 1915516 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-837733" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:33:01.597684 1915516 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:33:02.096224 1915516 type.go:165] "Request Body" body=""
	I1222 00:33:02.096288 1915516 node_ready.go:38] duration metric: took 6m0.000473787s for node "functional-837733" to be "Ready" ...
	I1222 00:33:02.099371 1915516 out.go:203] 
	W1222 00:33:02.102245 1915516 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:33:02.102270 1915516 out.go:285] * 
	W1222 00:33:02.104397 1915516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:33:02.107416 1915516 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:33:10 functional-837733 crio[5399]: time="2025-12-22T00:33:10.617626659Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c6f1acae-c477-41f7-830a-ff01af132bb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.639794438Z" level=info msg="Checking image status: minikube-local-cache-test:functional-837733" id=0783c1f4-6d62-42d6-aa6c-bad49863effd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.639986293Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.640030033Z" level=info msg="Image minikube-local-cache-test:functional-837733 not found" id=0783c1f4-6d62-42d6-aa6c-bad49863effd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.640102474Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-837733 found" id=0783c1f4-6d62-42d6-aa6c-bad49863effd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.664189211Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-837733" id=416451d8-14da-48f5-a344-4f368ea7fddb name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.664330893Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-837733 not found" id=416451d8-14da-48f5-a344-4f368ea7fddb name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.664372795Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-837733 found" id=416451d8-14da-48f5-a344-4f368ea7fddb name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.688241356Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-837733" id=990cb3df-5cf2-46a3-b39f-4a45bc4e5dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.688386328Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-837733 not found" id=990cb3df-5cf2-46a3-b39f-4a45bc4e5dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:11 functional-837733 crio[5399]: time="2025-12-22T00:33:11.688425301Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-837733 found" id=990cb3df-5cf2-46a3-b39f-4a45bc4e5dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.63936917Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=411a92f8-552c-4bf1-9ee3-18d2f051d870 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.956776534Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=07f64d6a-449c-4e73-a331-78dea6e21094 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.9569554Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=07f64d6a-449c-4e73-a331-78dea6e21094 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:12 functional-837733 crio[5399]: time="2025-12-22T00:33:12.956994636Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=07f64d6a-449c-4e73-a331-78dea6e21094 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.487551834Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=80a9bc21-d691-40a7-9ef4-5dacc774178d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.48768056Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=80a9bc21-d691-40a7-9ef4-5dacc774178d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.487715636Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=80a9bc21-d691-40a7-9ef4-5dacc774178d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.51275391Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5c484a81-d278-42e3-bb02-8eae07eb5ee3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.512921051Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5c484a81-d278-42e3-bb02-8eae07eb5ee3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.512970363Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5c484a81-d278-42e3-bb02-8eae07eb5ee3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.539390396Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7954635d-1ee0-4ffa-a86e-bef1b579fc27 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.539565775Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7954635d-1ee0-4ffa-a86e-bef1b579fc27 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:13 functional-837733 crio[5399]: time="2025-12-22T00:33:13.539607284Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7954635d-1ee0-4ffa-a86e-bef1b579fc27 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:33:14 functional-837733 crio[5399]: time="2025-12-22T00:33:14.061121527Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=766679b3-3967-49ff-ae5c-f74ba52d3401 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:33:18.088940    9575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:18.089498    9575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:18.090973    9575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:18.091382    9575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:33:18.092808    9575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:33:18 up 1 day,  7:15,  0 user,  load average: 0.61, 0.33, 0.79
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:33:15 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:16 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 22 00:33:16 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:16 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:16 functional-837733 kubelet[9450]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:16 functional-837733 kubelet[9450]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:16 functional-837733 kubelet[9450]: E1222 00:33:16.426454    9450 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:16 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:16 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:17 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 22 00:33:17 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:17 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:17 functional-837733 kubelet[9484]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:17 functional-837733 kubelet[9484]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:17 functional-837733 kubelet[9484]: E1222 00:33:17.121176    9484 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:17 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:17 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:33:17 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 22 00:33:17 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:17 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:33:17 functional-837733 kubelet[9532]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:17 functional-837733 kubelet[9532]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:33:17 functional-837733 kubelet[9532]: E1222 00:33:17.911840    9532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:33:17 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:33:17 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (337.082778ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.36s)
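The kubelet journal above pins down why every request to 192.168.49.2:8441 in this section is refused: the kubelet crash-loops (restart counters 1156-1158) because kubelet v1.35 refuses to start on a cgroup v1 host unless that is explicitly allowed, so the static-pod kube-apiserver never comes up. A minimal remediation sketch, not executed in this run — it assumes /var/lib/kubelet/config.yaml is the node's KubeletConfiguration, that the camelCase YAML field for the 'FailCgroupV1' option named in the error is failCgroupV1, and that the field is not already set:

	# Hypothetical sketch: explicitly allow kubelet >= v1.35 to keep running on a cgroup v1 host.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet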

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (735.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-837733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1222 00:36:25.758911 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:37:43.835852 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:39:06.887765 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:41:25.758828 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:42:43.841534 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-837733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.238577714s)

-- stdout --
	* [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00111221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-837733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.240152985s for "functional-837733" cluster.
I1222 00:45:32.315629 1881219 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
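Since the kubeadm warnings above attribute the failure to the host's cgroup v1 hierarchy, a quick host-side check is useful before retrying. A sketch using standard commands (not taken from this run):

	# "cgroup2fs" means the host runs cgroup v2; "tmpfs" means cgroup v1.
	stat -fc %T /sys/fs/cgroup
	# Inspect the kubelet crash loop directly, as the kubeadm output suggests.
	journalctl -xeu kubelet --no-pager | tail -n 50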
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
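For reference, the host-port mappings captured in the inspect output above (22/tcp -> 127.0.0.1:38471, and so on) can be read back with the same Go template that minikube's cli_runner invokes later in this log. A minimal sketch, assuming Docker is on PATH and the functional-837733 container still exists:

	// sketch: read the host port mapped to the container's SSH port (22/tcp),
	// mirroring the `docker container inspect -f` template used by cli_runner below.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-837733").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("ssh host port: %s\n", out) // prints 38471, per the Ports block above
	}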
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (293.745578ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
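minikube status encodes component state in its exit code rather than in stdout alone, which is why the helper flags the non-zero exit as "may be ok": the host reports Running, but exit status 2 generally indicates that not every checked component is healthy. A sketch for narrowing down which component is stopped, using the other fields of the status template ({{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}; assuming the same binary and profile as above):

	out/minikube-linux-arm64 status -p functional-837733 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'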
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                            │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ start          │ -p functional-837733 --alsologtostderr -v=8                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:26 UTC │                     │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:latest                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add minikube-local-cache-test:functional-837733                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache delete minikube-local-cache-test:functional-837733                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl images                                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ cache          │ functional-837733 cache reload                                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ kubectl        │ functional-837733 kubectl -- --context functional-837733 get pods                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ start          │ -p functional-837733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:33:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:33:19.123176 1921402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:33:19.123288 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123297 1921402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:33:19.123301 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123533 1921402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:33:19.123864 1921402 out.go:368] Setting JSON to false
	I1222 00:33:19.124702 1921402 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112549,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:33:19.124753 1921402 start.go:143] virtualization:  
	I1222 00:33:19.129939 1921402 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:33:19.132820 1921402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:33:19.132934 1921402 notify.go:221] Checking for updates...
	I1222 00:33:19.138397 1921402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:33:19.141280 1921402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:33:19.144132 1921402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:33:19.147015 1921402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:33:19.149703 1921402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:33:19.152994 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:19.153132 1921402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:33:19.180144 1921402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:33:19.180247 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.239273 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.229718138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.239382 1921402 docker.go:319] overlay module found
	I1222 00:33:19.242404 1921402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:33:19.245466 1921402 start.go:309] selected driver: docker
	I1222 00:33:19.245476 1921402 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.245601 1921402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:33:19.245707 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.303768 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.294614837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.304252 1921402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:33:19.304275 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:19.304327 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:19.304366 1921402 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.307509 1921402 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:33:19.310426 1921402 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:33:19.313438 1921402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:33:19.316296 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:19.316334 1921402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:33:19.316337 1921402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:33:19.316345 1921402 cache.go:65] Caching tarball of preloaded images
	I1222 00:33:19.316452 1921402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:33:19.316461 1921402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:33:19.316560 1921402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:33:19.335533 1921402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:33:19.335544 1921402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:33:19.335564 1921402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:33:19.335600 1921402 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:33:19.335668 1921402 start.go:364] duration metric: took 47.728µs to acquireMachinesLock for "functional-837733"
	I1222 00:33:19.335688 1921402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:33:19.335699 1921402 fix.go:54] fixHost starting: 
	I1222 00:33:19.335947 1921402 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:33:19.365010 1921402 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:33:19.365031 1921402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:33:19.368348 1921402 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:33:19.368376 1921402 machine.go:94] provisionDockerMachine start ...
	I1222 00:33:19.368449 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.391712 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.392069 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.392078 1921402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:33:19.528918 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.528932 1921402 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:33:19.529008 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.546388 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.546693 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.546705 1921402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:33:19.690177 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.690264 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.708307 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.708609 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.708623 1921402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
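The embedded shell above is minikube's idempotent /etc/hosts fixup: if no entry for functional-837733 exists yet, it either rewrites an existing 127.0.1.1 line in place with sed or appends a fresh one with tee -a, so repeated provisioning runs do not accumulate duplicate entries.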
	I1222 00:33:19.841808 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:33:19.841826 1921402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:33:19.841843 1921402 ubuntu.go:190] setting up certificates
	I1222 00:33:19.841860 1921402 provision.go:84] configureAuth start
	I1222 00:33:19.841918 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:19.861819 1921402 provision.go:143] copyHostCerts
	I1222 00:33:19.861874 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:33:19.861889 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:33:19.861964 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:33:19.862062 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:33:19.862066 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:33:19.862089 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:33:19.862134 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:33:19.862137 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:33:19.862158 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:33:19.862198 1921402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:33:19.952084 1921402 provision.go:177] copyRemoteCerts
	I1222 00:33:19.952138 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:33:19.952182 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.970262 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.085780 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:33:20.107727 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:33:20.128189 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:33:20.148142 1921402 provision.go:87] duration metric: took 306.259546ms to configureAuth
	I1222 00:33:20.148170 1921402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:33:20.148393 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:20.148508 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.170002 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:20.170298 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:20.170316 1921402 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:33:20.512165 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:33:20.512178 1921402 machine.go:97] duration metric: took 1.143796307s to provisionDockerMachine
	I1222 00:33:20.512211 1921402 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:33:20.512232 1921402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:33:20.512310 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:33:20.512363 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.530913 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.629793 1921402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:33:20.633223 1921402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:33:20.633240 1921402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:33:20.633250 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:33:20.633308 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:33:20.633414 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:33:20.633491 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:33:20.633538 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:33:20.641126 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:20.658985 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:33:20.677182 1921402 start.go:296] duration metric: took 164.956569ms for postStartSetup
	I1222 00:33:20.677276 1921402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:33:20.677314 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.694936 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.786654 1921402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:33:20.791424 1921402 fix.go:56] duration metric: took 1.455724646s for fixHost
	I1222 00:33:20.791440 1921402 start.go:83] releasing machines lock for "functional-837733", held for 1.455763874s
	I1222 00:33:20.791545 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:20.808426 1921402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:33:20.808469 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.808475 1921402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:33:20.808537 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.830636 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.834199 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.933012 1921402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:33:21.025436 1921402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:33:21.067278 1921402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:33:21.071776 1921402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:33:21.071868 1921402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:33:21.079967 1921402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:33:21.079982 1921402 start.go:496] detecting cgroup driver to use...
	I1222 00:33:21.080013 1921402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:33:21.080062 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:33:21.095680 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:33:21.108798 1921402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:33:21.108852 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:33:21.124535 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:33:21.138031 1921402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:33:21.254740 1921402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:33:21.374618 1921402 docker.go:234] disabling docker service ...
	I1222 00:33:21.374676 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:33:21.389413 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:33:21.402447 1921402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:33:21.520019 1921402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:33:21.643581 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:33:21.657853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:33:21.671138 1921402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:33:21.671202 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.680093 1921402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:33:21.680158 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.688698 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.697375 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.705901 1921402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:33:21.713923 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.722872 1921402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.731197 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.739610 1921402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:33:21.747277 1921402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:33:21.754578 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:21.862305 1921402 ssh_runner.go:195] Run: sudo systemctl restart crio
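The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause_image registry.k8s.io/pause:3.10.1, cgroup_manager cgroupfs, conmon_cgroup pod, and the net.ipv4.ip_unprivileged_port_start=0 entry under default_sysctls) before the daemon-reload and crio restart make the changes take effect.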
	I1222 00:33:22.033055 1921402 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:33:22.033138 1921402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:33:22.037243 1921402 start.go:564] Will wait 60s for crictl version
	I1222 00:33:22.037298 1921402 ssh_runner.go:195] Run: which crictl
	I1222 00:33:22.040800 1921402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:33:22.068746 1921402 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:33:22.068856 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.098980 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.132885 1921402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:33:22.135905 1921402 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:33:22.151401 1921402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:33:22.158212 1921402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:33:22.160827 1921402 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:33:22.160965 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:22.161027 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.202970 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.202982 1921402 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:33:22.203036 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.229110 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.229123 1921402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:33:22.229129 1921402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:33:22.229228 1921402 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
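Note the empty ExecStart= line preceding the real one in the kubelet unit above: for a systemd drop-in this is the standard way to clear the inherited ExecStart before redefining it, since a non-oneshot service may only declare a single ExecStart.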
	I1222 00:33:22.229311 1921402 ssh_runner.go:195] Run: crio config
	I1222 00:33:22.300174 1921402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:33:22.300213 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:22.300224 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:22.300232 1921402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:33:22.300257 1921402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:33:22.300399 1921402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
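This generated kubeadm config is where the --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision flag from the audit table above lands: extraconfig.go replaces the default admission-plugin list, and the ClusterConfiguration carries it as the apiServer extraArgs entry enable-admission-plugins: NamespaceAutoProvision.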
	
	I1222 00:33:22.300509 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:33:22.308005 1921402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:33:22.308080 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:33:22.315446 1921402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:33:22.327549 1921402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:33:22.340301 1921402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1222 00:33:22.356214 1921402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:33:22.360143 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:22.489828 1921402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:33:23.096445 1921402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:33:23.096455 1921402 certs.go:195] generating shared ca certs ...
	I1222 00:33:23.096469 1921402 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:33:23.096601 1921402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:33:23.096676 1921402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:33:23.096682 1921402 certs.go:257] generating profile certs ...
	I1222 00:33:23.096765 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:33:23.096814 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:33:23.096847 1921402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:33:23.097000 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:33:23.097032 1921402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:33:23.097039 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:33:23.097063 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:33:23.097085 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:33:23.097109 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:33:23.097150 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:23.097934 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:33:23.122733 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:33:23.143796 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:33:23.165783 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:33:23.184349 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:33:23.201211 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:33:23.218537 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:33:23.235950 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:33:23.253260 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:33:23.270022 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:33:23.287385 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:33:23.304167 1921402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:33:23.316581 1921402 ssh_runner.go:195] Run: openssl version
	I1222 00:33:23.322651 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.329731 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:33:23.336799 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340358 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340410 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.381351 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:33:23.388858 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.396030 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:33:23.403636 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407354 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407408 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.449165 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:33:23.456643 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.463729 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:33:23.471167 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474893 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474950 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.516093 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
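Each of the three cycles above symlinks a PEM from /usr/share/ca-certificates into /etc/ssl/certs, computes its OpenSSL subject hash (3ec20f2e, b5213941, 51391683), and then runs test -L on the corresponding <hash>.0 trust-store entry. A Go equivalent of that final check (paths from this log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
)

// checkHashLink is a Go version of `test -L` plus a dangling-link check:
// the <subject-hash>.0 entry must exist, be a symlink, and resolve.
func checkHashLink(link string) error {
	fi, err := os.Lstat(link)
	if err != nil {
		return err // missing entirely
	}
	if fi.Mode()&os.ModeSymlink == 0 {
		return fmt.Errorf("%s exists but is not a symlink", link)
	}
	if _, err := os.Stat(link); err != nil {
		return fmt.Errorf("%s is dangling: %w", link, err)
	}
	return nil
}

func main() {
	// The three subject-hash entries verified in the log above.
	for _, link := range []string{
		"/etc/ssl/certs/3ec20f2e.0",
		"/etc/ssl/certs/b5213941.0",
		"/etc/ssl/certs/51391683.0",
	} {
		if err := checkHashLink(link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}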
	I1222 00:33:23.523732 1921402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:33:23.527686 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:33:23.568782 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:33:23.609619 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:33:23.650314 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:33:23.696111 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:33:23.737413 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
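The six openssl x509 -checkend 86400 runs each ask whether a certificate survives the next 24 hours (86400 seconds). A rough Go equivalent using crypto/x509 (a sketch over one of the paths above, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd fails, like `openssl x509 -checkend`, if the certificate's
// NotAfter falls inside the given window from now.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s (within %s)", path, cert.NotAfter, window)
	}
	return nil
}

func main() {
	if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}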
	I1222 00:33:23.778391 1921402 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:23.778492 1921402 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:33:23.778592 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.806800 1921402 cri.go:96] found id: ""
	I1222 00:33:23.806880 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:33:23.814609 1921402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:33:23.814618 1921402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:33:23.814668 1921402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:33:23.821980 1921402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.822486 1921402 kubeconfig.go:125] found "functional-837733" server: "https://192.168.49.2:8441"
	I1222 00:33:23.823779 1921402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:33:23.831691 1921402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:18:48.647493207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:33:22.349448241 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
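The drift detection reduces to diff -u plus exit-status handling: status 0 means the files are identical, status 1 means they differ, as here where only the enable-admission-plugins value changed. A sketch of that pattern in Go (illustrative, not the actual kubeadm.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeadmDrifted runs `sudo diff -u old new` and treats exit status 1
// (files differ) as "config drift, reconfigure the cluster".
func kubeadmDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: the files differ
	}
	return false, "", err // nil error: identical; anything else: real failure
}

func main() {
	drifted, diff, err := kubeadmDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Print(diff) // the unified diff, as quoted in the log above
	}
}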
	I1222 00:33:23.831708 1921402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:33:23.831719 1921402 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 00:33:23.831773 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.866496 1921402 cri.go:96] found id: ""
	I1222 00:33:23.866558 1921402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:33:23.889134 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:33:23.897048 1921402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 22 00:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 00:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:22 /etc/kubernetes/scheduler.conf
	
	I1222 00:33:23.897106 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:33:23.904740 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:33:23.912683 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.912741 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:33:23.920107 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.927907 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.927961 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.935547 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:33:23.943201 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.943254 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
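Each grep-then-rm pair above encodes: if a kubeconfig no longer references the expected control-plane endpoint, delete it so the kubeconfig phase below regenerates it. Compactly, in Go (hypothetical helper; the endpoint and paths are the ones from this log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// ensureEndpoint deletes the kubeconfig when it does not mention the
// expected endpoint, mirroring the grep-then-rm pattern in the log.
func ensureEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		return os.Remove(path) // stale kubeconfig: kubeadm will regenerate it
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, p := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(p, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}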
	I1222 00:33:23.950967 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:33:23.959359 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:24.006017 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.352657 1921402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346615684s)
	I1222 00:33:25.352716 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.546706 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.608812 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
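The restart path re-runs the individual kubeadm init phases in order, each with the versioned binaries directory prepended to PATH. A minimal Go driver for the same sequence (a sketch mirroring the commands above):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The phases run above, in the same order.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		script := fmt.Sprintf(
			`env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		cmd := exec.Command("sudo", "/bin/bash", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			return // stop at the first failing phase
		}
	}
}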
	I1222 00:33:25.656813 1921402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:33:25.656915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... sudo pgrep -xnf kube-apiserver.*minikube.* re-run every ~500ms, 119 further probes from 00:33:26.157 through 00:34:25.157, none finding a kube-apiserver process ...]
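The probe cadence above is a simple poll-until-deadline loop. Sketched in Go (the one-minute timeout is an assumption for illustration; pgrep exits 0 only when a matching process exists):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms, the cadence visible in the
// timestamps above, until a kube-apiserver process appears or time runs out.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // exit status 0: the process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(time.Minute))
}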
	I1222 00:34:25.657000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:25.657101 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:25.682430 1921402 cri.go:96] found id: ""
	I1222 00:34:25.682444 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.682451 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:25.682457 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:25.682521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:25.708391 1921402 cri.go:96] found id: ""
	I1222 00:34:25.708406 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.708414 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:25.708420 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:25.708495 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:25.735246 1921402 cri.go:96] found id: ""
	I1222 00:34:25.735261 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.735268 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:25.735273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:25.735330 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:25.764584 1921402 cri.go:96] found id: ""
	I1222 00:34:25.764599 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.764606 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:25.764611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:25.764672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:25.790407 1921402 cri.go:96] found id: ""
	I1222 00:34:25.790421 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.790428 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:25.790434 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:25.790492 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:25.816256 1921402 cri.go:96] found id: ""
	I1222 00:34:25.816270 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.816278 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:25.816283 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:25.816345 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:25.842813 1921402 cri.go:96] found id: ""
	I1222 00:34:25.842828 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.842835 1921402 logs.go:284] No container was found matching "kindnet"
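The sweep above issues one crictl query per expected control-plane component; every one comes back empty because the kube-system containers were stopped earlier. The pattern, roughly, in Go (a sketch, not minikube's cri package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One crictl name-filtered query per component, as in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+c).Output()
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		ids := strings.Fields(string(out)) // empty output == `found id: ""` above
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
}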
	I1222 00:34:25.842843 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:25.842853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:25.914526 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:25.914546 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:25.948008 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:25.948024 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:26.016574 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:26.016596 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:26.032648 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:26.032666 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:26.102628 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:26.094028   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.094916   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.096594   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.097214   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.098390   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the stderr block above)
	** /stderr **
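The describe-nodes failure is expected at this stage: kubectl dials localhost:8441, nothing is listening yet, so every attempt ends in connection refused and the poll loop continues. A small probe that reproduces the same symptom (a sketch; host and port from this log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Nothing accepts on 8441 until the control-plane pods actually come up,
	// which is exactly what kubectl runs into in the block above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err) // connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}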
	I1222 00:34:28.604431 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:28.618535 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:28.618599 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:28.647904 1921402 cri.go:96] found id: ""
	I1222 00:34:28.647920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.647927 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:28.647933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:28.647987 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:28.674397 1921402 cri.go:96] found id: ""
	I1222 00:34:28.674411 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.674418 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:28.674423 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:28.674481 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:28.704093 1921402 cri.go:96] found id: ""
	I1222 00:34:28.704107 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.704114 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:28.704129 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:28.704187 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:28.729449 1921402 cri.go:96] found id: ""
	I1222 00:34:28.729463 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.729470 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:28.729475 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:28.729535 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:28.754277 1921402 cri.go:96] found id: ""
	I1222 00:34:28.754292 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.754300 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:28.754305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:28.754366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:28.779419 1921402 cri.go:96] found id: ""
	I1222 00:34:28.779445 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.779453 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:28.779459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:28.779526 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:28.804826 1921402 cri.go:96] found id: ""
	I1222 00:34:28.804841 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.804848 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:28.804855 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:28.804866 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:28.871702 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:28.871720 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:28.886327 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:28.886342 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:28.950915 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:28.942587   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.943003   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.944829   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.945306   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.946899   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the stderr block above)
	** /stderr **
	I1222 00:34:28.950935 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:28.950945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:29.021787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:29.021807 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.549965 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:31.559848 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:31.559909 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:31.586473 1921402 cri.go:96] found id: ""
	I1222 00:34:31.586487 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.586494 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:31.586500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:31.586557 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:31.619635 1921402 cri.go:96] found id: ""
	I1222 00:34:31.619649 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.619657 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:31.619662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:31.619718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:31.649990 1921402 cri.go:96] found id: ""
	I1222 00:34:31.650004 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.650011 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:31.650017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:31.650079 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:31.676339 1921402 cri.go:96] found id: ""
	I1222 00:34:31.676354 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.676361 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:31.676366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:31.676424 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:31.704084 1921402 cri.go:96] found id: ""
	I1222 00:34:31.704098 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.704105 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:31.704110 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:31.704170 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:31.728706 1921402 cri.go:96] found id: ""
	I1222 00:34:31.728720 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.728727 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:31.728732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:31.728790 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:31.756276 1921402 cri.go:96] found id: ""
	I1222 00:34:31.756290 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.756297 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:31.756304 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:31.756314 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:31.825066 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:31.825085 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.853826 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:31.853841 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:31.920106 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:31.920125 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:31.934967 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:31.934983 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:31.996347 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the stderr block above)
	** /stderr **
	I1222 00:34:34.497426 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:34.507616 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:34.507676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:34.531742 1921402 cri.go:96] found id: ""
	I1222 00:34:34.531755 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.531763 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:34.531769 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:34.531843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:34.559670 1921402 cri.go:96] found id: ""
	I1222 00:34:34.559683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.559691 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:34.559695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:34.559762 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:34.587516 1921402 cri.go:96] found id: ""
	I1222 00:34:34.587531 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.587538 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:34.587543 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:34.587621 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:34.618669 1921402 cri.go:96] found id: ""
	I1222 00:34:34.618683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.618690 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:34.618695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:34.618754 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:34.655347 1921402 cri.go:96] found id: ""
	I1222 00:34:34.655361 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.655368 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:34.655373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:34.655429 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:34.680612 1921402 cri.go:96] found id: ""
	I1222 00:34:34.680626 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.680643 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:34.680648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:34.680718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:34.706049 1921402 cri.go:96] found id: ""
	I1222 00:34:34.706070 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.706078 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:34.706085 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:34.706095 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:34.774755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:34.774774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:34.789897 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:34.789917 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:34.854675 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the stderr block above)
	** /stderr **
	I1222 00:34:34.854686 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:34.854696 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:34.923449 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:34.923469 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.451135 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:37.461580 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:37.461643 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:37.488448 1921402 cri.go:96] found id: ""
	I1222 00:34:37.488474 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.488482 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:37.488488 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:37.488544 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:37.513499 1921402 cri.go:96] found id: ""
	I1222 00:34:37.513514 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.513526 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:37.513531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:37.513595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:37.539878 1921402 cri.go:96] found id: ""
	I1222 00:34:37.539892 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.539899 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:37.539905 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:37.539979 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:37.568838 1921402 cri.go:96] found id: ""
	I1222 00:34:37.568853 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.568860 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:37.568866 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:37.568934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:37.604110 1921402 cri.go:96] found id: ""
	I1222 00:34:37.604124 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.604131 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:37.604156 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:37.604212 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:37.639214 1921402 cri.go:96] found id: ""
	I1222 00:34:37.639228 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.639235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:37.639241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:37.639301 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:37.674749 1921402 cri.go:96] found id: ""
	I1222 00:34:37.674763 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.674771 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:37.674779 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:37.674790 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.709493 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:37.709513 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:37.781144 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:37.781164 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:37.797445 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:37.797461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:37.864668 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:37.864678 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:37.864688 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.444815 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:40.456172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:40.456239 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:40.482752 1921402 cri.go:96] found id: ""
	I1222 00:34:40.482774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.482781 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:40.482786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:40.482846 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:40.508000 1921402 cri.go:96] found id: ""
	I1222 00:34:40.508014 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.508021 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:40.508026 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:40.508087 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:40.533866 1921402 cri.go:96] found id: ""
	I1222 00:34:40.533881 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.533889 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:40.533894 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:40.533955 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:40.559781 1921402 cri.go:96] found id: ""
	I1222 00:34:40.559794 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.559802 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:40.559807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:40.559866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:40.586989 1921402 cri.go:96] found id: ""
	I1222 00:34:40.587003 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.587010 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:40.587016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:40.587074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:40.619369 1921402 cri.go:96] found id: ""
	I1222 00:34:40.619384 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.619400 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:40.619405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:40.619462 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:40.648846 1921402 cri.go:96] found id: ""
	I1222 00:34:40.648870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.648878 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:40.648886 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:40.648896 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:40.720161 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:40.720180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:40.736116 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:40.736141 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:40.808179 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:40.808193 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:40.808204 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.876486 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:40.876505 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.409595 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:43.419504 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:43.419564 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:43.444338 1921402 cri.go:96] found id: ""
	I1222 00:34:43.444360 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.444368 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:43.444373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:43.444430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:43.471938 1921402 cri.go:96] found id: ""
	I1222 00:34:43.471953 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.471960 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:43.471965 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:43.472025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:43.497576 1921402 cri.go:96] found id: ""
	I1222 00:34:43.497590 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.497597 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:43.497611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:43.497667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:43.526748 1921402 cri.go:96] found id: ""
	I1222 00:34:43.526762 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.526769 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:43.526775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:43.526838 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:43.556676 1921402 cri.go:96] found id: ""
	I1222 00:34:43.556691 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.556698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:43.556704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:43.556766 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:43.584425 1921402 cri.go:96] found id: ""
	I1222 00:34:43.584440 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.584447 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:43.584453 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:43.584514 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:43.619760 1921402 cri.go:96] found id: ""
	I1222 00:34:43.619774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.619782 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:43.619789 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:43.619813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:43.693963 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:43.693985 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.722757 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:43.722773 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:43.790316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:43.790336 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:43.804810 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:43.804829 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:43.866141 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:46.366416 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:46.376013 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:46.376073 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:46.402801 1921402 cri.go:96] found id: ""
	I1222 00:34:46.402815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.402822 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:46.402828 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:46.402884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:46.431904 1921402 cri.go:96] found id: ""
	I1222 00:34:46.431920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.431927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:46.431933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:46.431991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:46.456574 1921402 cri.go:96] found id: ""
	I1222 00:34:46.456588 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.456594 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:46.456600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:46.456656 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:46.481542 1921402 cri.go:96] found id: ""
	I1222 00:34:46.481556 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.481563 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:46.481569 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:46.481627 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:46.507668 1921402 cri.go:96] found id: ""
	I1222 00:34:46.507682 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.507689 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:46.507694 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:46.507756 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:46.531662 1921402 cri.go:96] found id: ""
	I1222 00:34:46.531675 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.531683 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:46.531688 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:46.531746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:46.558345 1921402 cri.go:96] found id: ""
	I1222 00:34:46.558359 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.558367 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:46.558375 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:46.558385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:46.630348 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:46.630367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:46.648136 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:46.648153 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:46.720659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:46.720669 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:46.720689 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:46.790237 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:46.790255 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.320987 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:49.331027 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:49.331083 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:49.355873 1921402 cri.go:96] found id: ""
	I1222 00:34:49.355887 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.355903 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:49.355909 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:49.355967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:49.381127 1921402 cri.go:96] found id: ""
	I1222 00:34:49.381151 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.381158 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:49.381163 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:49.381231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:49.407961 1921402 cri.go:96] found id: ""
	I1222 00:34:49.407983 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.407990 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:49.407995 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:49.408062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:49.437856 1921402 cri.go:96] found id: ""
	I1222 00:34:49.437870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.437877 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:49.437883 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:49.437940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:49.462901 1921402 cri.go:96] found id: ""
	I1222 00:34:49.462914 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.462921 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:49.462927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:49.462981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:49.487289 1921402 cri.go:96] found id: ""
	I1222 00:34:49.487303 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.487309 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:49.487314 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:49.487372 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:49.513186 1921402 cri.go:96] found id: ""
	I1222 00:34:49.513200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.513208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:49.513215 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:49.513225 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:49.527889 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:49.527904 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:49.591170 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:49.591180 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:49.591189 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:49.670799 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:49.670817 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.698837 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:49.698853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.268261 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:52.278246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:52.278304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:52.303014 1921402 cri.go:96] found id: ""
	I1222 00:34:52.303035 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.303043 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:52.303048 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:52.303114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:52.331907 1921402 cri.go:96] found id: ""
	I1222 00:34:52.331921 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.331928 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:52.331933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:52.331989 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:52.357227 1921402 cri.go:96] found id: ""
	I1222 00:34:52.357240 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.357247 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:52.357253 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:52.357315 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:52.383730 1921402 cri.go:96] found id: ""
	I1222 00:34:52.383745 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.383752 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:52.383758 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:52.383819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:52.409605 1921402 cri.go:96] found id: ""
	I1222 00:34:52.409620 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.409636 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:52.409641 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:52.409701 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:52.438817 1921402 cri.go:96] found id: ""
	I1222 00:34:52.438831 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.438838 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:52.438843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:52.438901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:52.464983 1921402 cri.go:96] found id: ""
	I1222 00:34:52.465007 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.465016 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:52.465027 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:52.465038 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:52.533460 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:52.533470 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:52.533493 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:52.602597 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:52.602617 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:52.636250 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:52.636266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.708981 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:52.709002 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.224727 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:55.235168 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:55.235225 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:55.260911 1921402 cri.go:96] found id: ""
	I1222 00:34:55.260926 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.260933 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:55.260938 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:55.260994 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:55.286615 1921402 cri.go:96] found id: ""
	I1222 00:34:55.286634 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.286651 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:55.286657 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:55.286736 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:55.311470 1921402 cri.go:96] found id: ""
	I1222 00:34:55.311484 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.311492 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:55.311497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:55.311555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:55.336659 1921402 cri.go:96] found id: ""
	I1222 00:34:55.336673 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.336689 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:55.336695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:55.336755 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:55.368789 1921402 cri.go:96] found id: ""
	I1222 00:34:55.368815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.368823 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:55.368829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:55.368895 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:55.394360 1921402 cri.go:96] found id: ""
	I1222 00:34:55.394374 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.394381 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:55.394387 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:55.394448 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:55.419182 1921402 cri.go:96] found id: ""
	I1222 00:34:55.419196 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.419204 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:55.419211 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:55.419224 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:55.486298 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:55.486316 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.501160 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:55.501179 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:55.569075 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:55.569085 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:55.569105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:55.643182 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:55.643207 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.182663 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:58.193476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:58.193537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:58.219859 1921402 cri.go:96] found id: ""
	I1222 00:34:58.219873 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.219880 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:58.219886 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:58.219944 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:58.245686 1921402 cri.go:96] found id: ""
	I1222 00:34:58.245700 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.245707 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:58.245712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:58.245769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:58.272173 1921402 cri.go:96] found id: ""
	I1222 00:34:58.272187 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.272194 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:58.272199 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:58.272258 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:58.296505 1921402 cri.go:96] found id: ""
	I1222 00:34:58.296519 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.296526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:58.296532 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:58.296590 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:58.321897 1921402 cri.go:96] found id: ""
	I1222 00:34:58.321911 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.321919 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:58.321924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:58.321981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:58.347185 1921402 cri.go:96] found id: ""
	I1222 00:34:58.347200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.347206 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:58.347212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:58.347272 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:58.371593 1921402 cri.go:96] found id: ""
	I1222 00:34:58.371607 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.371614 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:58.371621 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:58.371631 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:58.439841 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:58.439861 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.471817 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:58.471833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:58.540043 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:58.540062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:58.556739 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:58.556755 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:58.632107 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:01.133446 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:01.145134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:01.145201 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:01.174836 1921402 cri.go:96] found id: ""
	I1222 00:35:01.174853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.174861 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:01.174867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:01.174943 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:01.202691 1921402 cri.go:96] found id: ""
	I1222 00:35:01.202706 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.202715 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:01.202721 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:01.202785 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:01.232555 1921402 cri.go:96] found id: ""
	I1222 00:35:01.232578 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.232586 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:01.232592 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:01.232687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:01.260828 1921402 cri.go:96] found id: ""
	I1222 00:35:01.260853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.260861 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:01.260867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:01.260954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:01.290608 1921402 cri.go:96] found id: ""
	I1222 00:35:01.290623 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.290630 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:01.290635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:01.290721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:01.321704 1921402 cri.go:96] found id: ""
	I1222 00:35:01.321719 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.321726 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:01.321732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:01.321802 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:01.349512 1921402 cri.go:96] found id: ""
	I1222 00:35:01.349526 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.349533 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:01.349540 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:01.349551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:01.385559 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:01.385575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:01.457873 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:01.457900 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:01.473910 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:01.473928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:01.540419 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:01.540431 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:01.540442 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
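[Editor's note] The cycle above repeats unchanged roughly every three seconds for the rest of this window; only the timestamps and the kubectl PIDs differ. Each pass first checks for a live apiserver process (pgrep), then lists CRI containers for every control-plane component (crictl), and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics before retrying. A minimal Go sketch of such a wait-and-diagnose loop follows, using only the shell commands shown verbatim in the log; runSSH, components, and the fixed 3-second retry are illustrative stand-ins, not minikube's actual helpers (minikube runs these via ssh_runner.go against the node).

// Hypothetical, simplified reconstruction of the poll loop visible in this log.
// Command strings are copied verbatim from the log lines above; all names here
// are illustrative, not minikube's real API.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runSSH stands in for ssh_runner.go's Run: the sketch executes locally so it
// is self-contained.
func runSSH(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	// Retries until the apiserver appears; in this log it never does
	// within the captured window.
	for {
		if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			fmt.Println("apiserver process found")
		}
		// List CRI containers for each control-plane component; an empty
		// result matches the `found id: ""` lines in the log.
		for _, name := range components {
			out, _ := runSSH("sudo crictl --timeout=10s ps -a --quiet --name=" + name)
			if strings.TrimSpace(out) == "" {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
		// With nothing running, gather diagnostics before the next attempt.
		for _, cmd := range []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u crio -n 400",
		} {
			runSSH(cmd)
		}
		time.Sleep(3 * time.Second)
	}
}

Because every crictl listing returns no container IDs, the describe-nodes step's kubectl keeps dialing https://localhost:8441 with nothing listening, which is exactly the connection-refused stderr repeated in each block below.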
	I1222 00:35:04.114962 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:04.126202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:04.126262 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:04.157781 1921402 cri.go:96] found id: ""
	I1222 00:35:04.157795 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.157802 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:04.157807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:04.157868 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:04.185090 1921402 cri.go:96] found id: ""
	I1222 00:35:04.185104 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.185114 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:04.185119 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:04.185189 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:04.210432 1921402 cri.go:96] found id: ""
	I1222 00:35:04.210446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.210453 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:04.210458 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:04.210518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:04.235769 1921402 cri.go:96] found id: ""
	I1222 00:35:04.235783 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.235791 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:04.235797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:04.235853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:04.260844 1921402 cri.go:96] found id: ""
	I1222 00:35:04.260858 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.260867 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:04.260872 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:04.260940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:04.286088 1921402 cri.go:96] found id: ""
	I1222 00:35:04.286103 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.286110 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:04.286115 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:04.286171 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:04.311902 1921402 cri.go:96] found id: ""
	I1222 00:35:04.311916 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.311923 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:04.311930 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:04.311945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:04.383800 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:04.383820 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:04.398944 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:04.398960 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:04.463648 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:04.463658 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:04.463668 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:04.533070 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:04.533091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:07.062949 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:07.072997 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:07.073058 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:07.104724 1921402 cri.go:96] found id: ""
	I1222 00:35:07.104739 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.104746 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:07.104751 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:07.104808 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:07.142683 1921402 cri.go:96] found id: ""
	I1222 00:35:07.142697 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.142704 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:07.142712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:07.142772 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:07.168569 1921402 cri.go:96] found id: ""
	I1222 00:35:07.168583 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.168590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:07.168596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:07.168654 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:07.196279 1921402 cri.go:96] found id: ""
	I1222 00:35:07.196295 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.196302 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:07.196307 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:07.196366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:07.220862 1921402 cri.go:96] found id: ""
	I1222 00:35:07.220876 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.220883 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:07.220888 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:07.220967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:07.247783 1921402 cri.go:96] found id: ""
	I1222 00:35:07.247797 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.247804 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:07.247810 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:07.247866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:07.272632 1921402 cri.go:96] found id: ""
	I1222 00:35:07.272658 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.272668 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:07.272675 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:07.272685 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:07.342006 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:07.342026 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:07.357444 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:07.357461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:07.428354 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:07.428365 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:07.428376 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:07.499031 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:07.499051 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.035662 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:10.047188 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:10.047256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:10.074669 1921402 cri.go:96] found id: ""
	I1222 00:35:10.074694 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.074702 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:10.074708 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:10.074775 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:10.111688 1921402 cri.go:96] found id: ""
	I1222 00:35:10.111702 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.111710 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:10.111716 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:10.111816 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:10.141600 1921402 cri.go:96] found id: ""
	I1222 00:35:10.141614 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.141622 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:10.141635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:10.141697 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:10.170620 1921402 cri.go:96] found id: ""
	I1222 00:35:10.170635 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.170642 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:10.170648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:10.170709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:10.196076 1921402 cri.go:96] found id: ""
	I1222 00:35:10.196090 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.196098 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:10.196111 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:10.196173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:10.222435 1921402 cri.go:96] found id: ""
	I1222 00:35:10.222449 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.222456 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:10.222461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:10.222518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:10.248319 1921402 cri.go:96] found id: ""
	I1222 00:35:10.248333 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.248350 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:10.248356 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:10.248367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:10.263245 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:10.263267 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:10.325942 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:10.325966 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:10.325977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:10.395942 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:10.395961 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.423671 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:10.423686 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:12.990380 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:13.000511 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:13.000585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:13.028285 1921402 cri.go:96] found id: ""
	I1222 00:35:13.028299 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.028306 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:13.028311 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:13.028380 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:13.055795 1921402 cri.go:96] found id: ""
	I1222 00:35:13.055809 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.055816 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:13.055821 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:13.055878 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:13.083562 1921402 cri.go:96] found id: ""
	I1222 00:35:13.083576 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.083584 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:13.083589 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:13.083647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:13.123340 1921402 cri.go:96] found id: ""
	I1222 00:35:13.123353 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.123360 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:13.123366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:13.123422 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:13.160293 1921402 cri.go:96] found id: ""
	I1222 00:35:13.160306 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.160314 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:13.160321 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:13.160393 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:13.186389 1921402 cri.go:96] found id: ""
	I1222 00:35:13.186403 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.186410 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:13.186415 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:13.186470 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:13.214187 1921402 cri.go:96] found id: ""
	I1222 00:35:13.214202 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.214208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:13.214216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:13.214228 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:13.241162 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:13.241178 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:13.306852 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:13.306871 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:13.322142 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:13.322158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:13.389384 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:13.389394 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:13.389419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:15.963833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:15.973870 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:15.973930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:15.999121 1921402 cri.go:96] found id: ""
	I1222 00:35:15.999135 1921402 logs.go:282] 0 containers: []
	W1222 00:35:15.999142 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:15.999147 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:15.999203 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:16.027303 1921402 cri.go:96] found id: ""
	I1222 00:35:16.027317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.027324 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:16.027330 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:16.027390 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:16.056726 1921402 cri.go:96] found id: ""
	I1222 00:35:16.056740 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.056747 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:16.056753 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:16.056809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:16.082920 1921402 cri.go:96] found id: ""
	I1222 00:35:16.082934 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.082941 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:16.082946 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:16.083011 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:16.120145 1921402 cri.go:96] found id: ""
	I1222 00:35:16.120160 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.120167 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:16.120172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:16.120228 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:16.149800 1921402 cri.go:96] found id: ""
	I1222 00:35:16.149814 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.149821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:16.149827 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:16.149883 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:16.175126 1921402 cri.go:96] found id: ""
	I1222 00:35:16.175140 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.175147 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:16.175155 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:16.175168 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:16.241780 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:16.241800 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:16.256196 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:16.256212 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:16.317803 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:16.317813 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:16.317824 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:16.386643 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:16.386662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:18.918855 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:18.928947 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:18.929009 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:18.954131 1921402 cri.go:96] found id: ""
	I1222 00:35:18.954151 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.954158 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:18.954169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:18.954229 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:18.978729 1921402 cri.go:96] found id: ""
	I1222 00:35:18.978780 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.978788 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:18.978793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:18.978859 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:19.006460 1921402 cri.go:96] found id: ""
	I1222 00:35:19.006474 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.006482 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:19.006487 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:19.006547 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:19.035802 1921402 cri.go:96] found id: ""
	I1222 00:35:19.035818 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.035825 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:19.035830 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:19.035889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:19.061703 1921402 cri.go:96] found id: ""
	I1222 00:35:19.061716 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.061723 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:19.061729 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:19.061789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:19.094243 1921402 cri.go:96] found id: ""
	I1222 00:35:19.094256 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.094264 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:19.094269 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:19.094324 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:19.135211 1921402 cri.go:96] found id: ""
	I1222 00:35:19.135238 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.135246 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:19.135253 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:19.135263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:19.203366 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:19.203376 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:19.203385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:19.275567 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:19.275585 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:19.303090 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:19.303105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:19.372108 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:19.372127 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:21.888736 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:21.899401 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:21.899461 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:21.924030 1921402 cri.go:96] found id: ""
	I1222 00:35:21.924045 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.924052 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:21.924057 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:21.924114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:21.950593 1921402 cri.go:96] found id: ""
	I1222 00:35:21.950606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.950613 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:21.950618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:21.950676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:21.978354 1921402 cri.go:96] found id: ""
	I1222 00:35:21.978368 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.978375 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:21.978380 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:21.978437 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:22.005709 1921402 cri.go:96] found id: ""
	I1222 00:35:22.005724 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.005734 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:22.005741 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:22.005807 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:22.033771 1921402 cri.go:96] found id: ""
	I1222 00:35:22.033785 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.033793 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:22.033799 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:22.033862 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:22.063584 1921402 cri.go:96] found id: ""
	I1222 00:35:22.063598 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.063606 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:22.063611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:22.063672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:22.090629 1921402 cri.go:96] found id: ""
	I1222 00:35:22.090643 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.090651 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:22.090658 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:22.090669 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:22.172686 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:22.172713 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:22.192440 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:22.192459 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:22.261299 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:22.261309 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:22.261339 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:22.337282 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:22.337301 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:24.868746 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:24.878846 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:24.878904 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:24.903813 1921402 cri.go:96] found id: ""
	I1222 00:35:24.903828 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.903835 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:24.903841 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:24.903901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:24.930142 1921402 cri.go:96] found id: ""
	I1222 00:35:24.930157 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.930164 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:24.930169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:24.930224 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:24.954989 1921402 cri.go:96] found id: ""
	I1222 00:35:24.955009 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.955016 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:24.955022 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:24.955082 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:24.984399 1921402 cri.go:96] found id: ""
	I1222 00:35:24.984413 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.984421 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:24.984426 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:24.984489 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:25.016621 1921402 cri.go:96] found id: ""
	I1222 00:35:25.016637 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.016644 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:25.016650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:25.016714 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:25.044807 1921402 cri.go:96] found id: ""
	I1222 00:35:25.044821 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.044830 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:25.044835 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:25.044926 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:25.071499 1921402 cri.go:96] found id: ""
	I1222 00:35:25.071514 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.071521 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:25.071530 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:25.071541 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:25.149156 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:25.149180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:25.168790 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:25.168813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:25.235601 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:25.235611 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:25.235622 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:25.306834 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:25.306854 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
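	The block above is one full iteration of minikube's apiserver wait loop: pgrep finds no kube-apiserver process, each crictl query for the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) returns no IDs, and the fallback log gathering (kubelet, dmesg, describe nodes, CRI-O, container status) runs before the next poll roughly three seconds later. The same cycle repeats for the rest of this section. The probes can be reproduced by hand on the node; this is a sketch using only the commands visible in the log itself, and it assumes a shell on the node (for example via minikube ssh):

	    # Check for a running apiserver process, as the wait loop does.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # List all CRI containers matching a control-plane name (empty here).
	    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	    # Tail the kubelet and CRI-O units that the log gathering reads.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # Container status fallback, preferring crictl over docker.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a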
	I1222 00:35:27.843405 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:27.854516 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:27.854584 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:27.881946 1921402 cri.go:96] found id: ""
	I1222 00:35:27.881960 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.881967 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:27.881972 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:27.882034 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:27.911889 1921402 cri.go:96] found id: ""
	I1222 00:35:27.911903 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.911922 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:27.911927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:27.911984 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:27.938642 1921402 cri.go:96] found id: ""
	I1222 00:35:27.938656 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.938687 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:27.938693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:27.938770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:27.965296 1921402 cri.go:96] found id: ""
	I1222 00:35:27.965317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.965336 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:27.965342 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:27.965410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:27.990233 1921402 cri.go:96] found id: ""
	I1222 00:35:27.990247 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.990254 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:27.990260 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:27.990316 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:28.017956 1921402 cri.go:96] found id: ""
	I1222 00:35:28.017970 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.017986 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:28.017993 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:28.018051 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:28.046152 1921402 cri.go:96] found id: ""
	I1222 00:35:28.046174 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.046182 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:28.046189 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:28.046200 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:28.114636 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:28.114656 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:28.131317 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:28.131333 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:28.201574 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:28.201584 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:28.201594 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:28.273807 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:28.273827 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:30.802833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:30.812760 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:30.812828 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:30.837592 1921402 cri.go:96] found id: ""
	I1222 00:35:30.837606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.837613 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:30.837618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:30.837678 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:30.866275 1921402 cri.go:96] found id: ""
	I1222 00:35:30.866289 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.866296 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:30.866301 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:30.866360 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:30.891264 1921402 cri.go:96] found id: ""
	I1222 00:35:30.891279 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.891286 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:30.891292 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:30.891350 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:30.915256 1921402 cri.go:96] found id: ""
	I1222 00:35:30.915270 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.915277 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:30.915288 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:30.915344 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:30.939875 1921402 cri.go:96] found id: ""
	I1222 00:35:30.939888 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.939895 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:30.939901 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:30.939959 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:30.964213 1921402 cri.go:96] found id: ""
	I1222 00:35:30.964227 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.964235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:30.964240 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:30.964298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:30.989063 1921402 cri.go:96] found id: ""
	I1222 00:35:30.989076 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.989084 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:30.989091 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:30.989104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:31.055959 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:31.055969 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:31.055980 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:31.130704 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:31.130724 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:31.175086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:31.175102 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:31.241755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:31.241774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:33.758213 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:33.768479 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:33.768543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:33.800290 1921402 cri.go:96] found id: ""
	I1222 00:35:33.800304 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.800311 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:33.800317 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:33.800374 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:33.828235 1921402 cri.go:96] found id: ""
	I1222 00:35:33.828249 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.828256 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:33.828261 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:33.828319 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:33.853431 1921402 cri.go:96] found id: ""
	I1222 00:35:33.853446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.853454 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:33.853459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:33.853517 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:33.878506 1921402 cri.go:96] found id: ""
	I1222 00:35:33.878520 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.878526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:33.878531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:33.878589 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:33.903278 1921402 cri.go:96] found id: ""
	I1222 00:35:33.903292 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.903299 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:33.903305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:33.903361 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:33.931624 1921402 cri.go:96] found id: ""
	I1222 00:35:33.931638 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.931645 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:33.931650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:33.931709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:33.960994 1921402 cri.go:96] found id: ""
	I1222 00:35:33.961013 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.961029 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:33.961042 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:33.961053 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:34.028536 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:34.028559 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:34.043699 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:34.043717 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:34.113813 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:34.113822 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:34.113832 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:34.196400 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:34.196421 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:36.727299 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:36.737834 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:36.737892 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:36.762640 1921402 cri.go:96] found id: ""
	I1222 00:35:36.762654 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.762661 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:36.762666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:36.762722 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:36.789016 1921402 cri.go:96] found id: ""
	I1222 00:35:36.789030 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.789036 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:36.789041 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:36.789097 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:36.814574 1921402 cri.go:96] found id: ""
	I1222 00:35:36.814588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.814595 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:36.814600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:36.814661 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:36.839346 1921402 cri.go:96] found id: ""
	I1222 00:35:36.839361 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.839368 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:36.839374 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:36.839430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:36.868832 1921402 cri.go:96] found id: ""
	I1222 00:35:36.868846 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.868853 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:36.868858 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:36.868930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:36.894581 1921402 cri.go:96] found id: ""
	I1222 00:35:36.894595 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.894602 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:36.894607 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:36.894664 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:36.924019 1921402 cri.go:96] found id: ""
	I1222 00:35:36.924033 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.924040 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:36.924047 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:36.924059 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:36.990902 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:36.990921 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:37.008849 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:37.008867 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:37.083336 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:37.083345 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:37.083356 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:37.173618 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:37.173638 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:39.706094 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:39.716405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:39.716478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:39.746846 1921402 cri.go:96] found id: ""
	I1222 00:35:39.746861 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.746868 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:39.746873 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:39.746933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:39.775788 1921402 cri.go:96] found id: ""
	I1222 00:35:39.775802 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.775810 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:39.775815 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:39.775872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:39.800503 1921402 cri.go:96] found id: ""
	I1222 00:35:39.800523 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.800531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:39.800536 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:39.800595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:39.829015 1921402 cri.go:96] found id: ""
	I1222 00:35:39.829028 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.829035 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:39.829040 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:39.829096 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:39.856667 1921402 cri.go:96] found id: ""
	I1222 00:35:39.856680 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.856688 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:39.856693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:39.856752 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:39.881236 1921402 cri.go:96] found id: ""
	I1222 00:35:39.881259 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.881266 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:39.881272 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:39.881365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:39.907774 1921402 cri.go:96] found id: ""
	I1222 00:35:39.907788 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.907795 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:39.907803 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:39.907814 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:39.973961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:39.973979 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:39.988619 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:39.988633 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:40.060659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:40.060671 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:40.060682 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:40.133570 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:40.133590 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:42.668577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:42.678942 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:42.678998 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:42.708217 1921402 cri.go:96] found id: ""
	I1222 00:35:42.708231 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.708237 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:42.708243 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:42.708298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:42.733470 1921402 cri.go:96] found id: ""
	I1222 00:35:42.733484 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.733491 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:42.733497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:42.733555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:42.758025 1921402 cri.go:96] found id: ""
	I1222 00:35:42.758039 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.758050 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:42.758056 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:42.758114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:42.783070 1921402 cri.go:96] found id: ""
	I1222 00:35:42.783084 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.783092 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:42.783097 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:42.783156 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:42.813432 1921402 cri.go:96] found id: ""
	I1222 00:35:42.813447 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.813454 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:42.813459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:42.813522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:42.841107 1921402 cri.go:96] found id: ""
	I1222 00:35:42.841121 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.841137 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:42.841143 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:42.841208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:42.866521 1921402 cri.go:96] found id: ""
	I1222 00:35:42.866535 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.866543 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:42.866551 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:42.866562 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:42.933074 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:42.933092 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:42.950000 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:42.950025 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:43.021528 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:43.021539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:43.021551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:43.096285 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:43.096308 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:45.625156 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:45.635680 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:45.635759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:45.665749 1921402 cri.go:96] found id: ""
	I1222 00:35:45.665763 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.665770 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:45.665775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:45.665833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:45.692343 1921402 cri.go:96] found id: ""
	I1222 00:35:45.692357 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.692364 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:45.692370 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:45.692432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:45.721093 1921402 cri.go:96] found id: ""
	I1222 00:35:45.721108 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.721115 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:45.721120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:45.721180 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:45.747092 1921402 cri.go:96] found id: ""
	I1222 00:35:45.747107 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.747114 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:45.747120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:45.747177 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:45.776676 1921402 cri.go:96] found id: ""
	I1222 00:35:45.776691 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.776698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:45.776704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:45.776760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:45.802786 1921402 cri.go:96] found id: ""
	I1222 00:35:45.802800 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.802808 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:45.802814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:45.802873 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:45.828248 1921402 cri.go:96] found id: ""
	I1222 00:35:45.828269 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.828279 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:45.828287 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:45.828299 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:45.844960 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:45.844976 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:45.915742 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:45.915756 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:45.915766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:45.994270 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:45.994295 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:46.025259 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:46.025275 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:48.596430 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:48.606663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:48.606725 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:48.635341 1921402 cri.go:96] found id: ""
	I1222 00:35:48.635356 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.635363 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:48.635369 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:48.635430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:48.661161 1921402 cri.go:96] found id: ""
	I1222 00:35:48.661184 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.661191 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:48.661196 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:48.661259 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:48.690450 1921402 cri.go:96] found id: ""
	I1222 00:35:48.690464 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.690470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:48.690476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:48.690537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:48.716347 1921402 cri.go:96] found id: ""
	I1222 00:35:48.716363 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.716388 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:48.716409 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:48.716477 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:48.746573 1921402 cri.go:96] found id: ""
	I1222 00:35:48.746588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.746595 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:48.746600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:48.746659 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:48.772755 1921402 cri.go:96] found id: ""
	I1222 00:35:48.772770 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.772777 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:48.772783 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:48.772843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:48.799586 1921402 cri.go:96] found id: ""
	I1222 00:35:48.799600 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.799607 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:48.799615 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:48.799629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:48.860067 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:48.852213   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.852883   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.853939   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.854510   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.856073   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:48.860076 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:48.860087 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:48.930351 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:48.930371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:48.960079 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:48.960099 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:49.028235 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:49.028254 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
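The pgrep probes at 00:35:48, 00:35:51, 00:35:54, ... show the wait loop re-checking for a kube-apiserver process roughly every three seconds, gathering diagnostics on each miss. A minimal Go sketch of that kind of poll (illustrative only; the two-minute deadline is an assumption, not taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverProcessUp() bool {
	// pgrep exits 0 only if at least one process matches the pattern,
	// the same check as "sudo pgrep -xnf kube-apiserver.*minikube.*" above.
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline for the sketch
	for time.Now().Before(deadline) {
		if apiserverProcessUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the spacing of the log entries
	}
	fmt.Println("timed out waiting for a kube-apiserver process")
}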
	I1222 00:35:51.545623 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:51.556102 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:51.556160 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:51.584081 1921402 cri.go:96] found id: ""
	I1222 00:35:51.584095 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.584102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:51.584108 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:51.584166 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:51.608602 1921402 cri.go:96] found id: ""
	I1222 00:35:51.608616 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.608623 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:51.608628 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:51.608684 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:51.638215 1921402 cri.go:96] found id: ""
	I1222 00:35:51.638229 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.638236 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:51.638242 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:51.638300 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:51.673979 1921402 cri.go:96] found id: ""
	I1222 00:35:51.673993 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.674000 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:51.674005 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:51.674062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:51.703793 1921402 cri.go:96] found id: ""
	I1222 00:35:51.703807 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.703814 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:51.703820 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:51.703879 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:51.731106 1921402 cri.go:96] found id: ""
	I1222 00:35:51.731120 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.731128 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:51.731133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:51.731191 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:51.758738 1921402 cri.go:96] found id: ""
	I1222 00:35:51.758752 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.758759 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:51.758766 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:51.758776 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:51.788397 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:51.788413 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:51.855218 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:51.855236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.870464 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:51.870480 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:51.935854 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:51.927185   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.927877   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.929600   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.930120   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.931805   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:51.935864 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:51.935874 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
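Each describe-nodes attempt dies at the TCP layer: dial tcp [::1]:8441: connect: connection refused means the connection was rejected outright, i.e. nothing is listening on the apiserver port 8441 — consistent with crictl finding no kube-apiserver container. A refused connection is distinguishable from a timeout with a direct probe; an illustrative Go sketch, assuming it runs on the node itself:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("something is listening on 8441")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		// Refused = the SYN was answered with RST: no listener at all,
		// as opposed to a timeout (unreachable or filtered host).
		fmt.Println("connection refused: no process is listening on 8441")
		return
	}
	fmt.Printf("dial failed for another reason: %v\n", err)
}

On this evidence the failure is upstream of kubectl: until a kube-apiserver container exists and binds 8441, every API-dependent gather step will keep failing the same way.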
	I1222 00:35:54.506346 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:54.516494 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:54.516555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:54.542861 1921402 cri.go:96] found id: ""
	I1222 00:35:54.542877 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.542884 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:54.542889 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:54.542947 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:54.567393 1921402 cri.go:96] found id: ""
	I1222 00:35:54.567409 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.567417 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:54.567422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:54.567480 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:54.596220 1921402 cri.go:96] found id: ""
	I1222 00:35:54.596234 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.596241 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:54.596246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:54.596307 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:54.621195 1921402 cri.go:96] found id: ""
	I1222 00:35:54.621210 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.621217 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:54.621223 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:54.621286 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:54.647655 1921402 cri.go:96] found id: ""
	I1222 00:35:54.647669 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.647676 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:54.647681 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:54.647746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:54.673352 1921402 cri.go:96] found id: ""
	I1222 00:35:54.673365 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.673372 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:54.673378 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:54.673434 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:54.702345 1921402 cri.go:96] found id: ""
	I1222 00:35:54.702359 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.702366 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:54.702373 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:54.702385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:54.729725 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:54.729740 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:54.795556 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:54.795573 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:54.810399 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:54.810415 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:54.873205 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:54.864296   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.865005   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.866876   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.867451   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.869170   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:54.873215 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:54.873236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
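With no containers to inspect, the gatherer falls back to host-level sources on every cycle: the last 400 lines of the kubelet and CRI-O journals, dmesg filtered to warnings and above, and a container listing that falls back to docker ps when crictl is missing. The same fan-out, sketched as plain local command execution (illustrative only; the real run drives these through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		// Backticks below are bash command substitution, as in the log line.
		{"containers", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("== %s (err: %v) ==\n%s\n", c.name, err, out)
	}
}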
	I1222 00:35:57.442564 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:57.452367 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:57.452430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:57.479752 1921402 cri.go:96] found id: ""
	I1222 00:35:57.479765 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.479773 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:57.479780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:57.479843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:57.505262 1921402 cri.go:96] found id: ""
	I1222 00:35:57.505277 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.505286 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:57.505291 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:57.505367 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:57.530750 1921402 cri.go:96] found id: ""
	I1222 00:35:57.530764 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.530771 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:57.530777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:57.530833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:57.557273 1921402 cri.go:96] found id: ""
	I1222 00:35:57.557286 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.557293 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:57.557298 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:57.557375 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:57.589790 1921402 cri.go:96] found id: ""
	I1222 00:35:57.589803 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.589810 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:57.589816 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:57.589872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:57.615560 1921402 cri.go:96] found id: ""
	I1222 00:35:57.615573 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.615580 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:57.615585 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:57.615647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:57.643950 1921402 cri.go:96] found id: ""
	I1222 00:35:57.643964 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.643971 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:57.643979 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:57.643989 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:57.710337 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:57.710357 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:57.725273 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:57.725289 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:57.792176 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:57.782446   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.783810   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.784701   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.785725   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.786474   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:57.792186 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:57.792202 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.869558 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:57.869578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:00.405062 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:00.432605 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:00.432680 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:00.470863 1921402 cri.go:96] found id: ""
	I1222 00:36:00.470879 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.470886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:00.470892 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:00.470958 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:00.501706 1921402 cri.go:96] found id: ""
	I1222 00:36:00.501723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.501736 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:00.501743 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:00.501825 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:00.531718 1921402 cri.go:96] found id: ""
	I1222 00:36:00.531733 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.531741 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:00.531746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:00.531814 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:00.560495 1921402 cri.go:96] found id: ""
	I1222 00:36:00.560511 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.560518 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:00.560525 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:00.560585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:00.588202 1921402 cri.go:96] found id: ""
	I1222 00:36:00.588217 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.588225 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:00.588230 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:00.588293 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:00.616313 1921402 cri.go:96] found id: ""
	I1222 00:36:00.616328 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.616335 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:00.616341 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:00.616402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:00.647784 1921402 cri.go:96] found id: ""
	I1222 00:36:00.647799 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.647806 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:00.647814 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:00.647823 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:00.715096 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:00.715115 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:00.730829 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:00.730845 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:00.797617 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:00.797630 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:00.797642 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:00.866765 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:00.866784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.396597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:03.407146 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:03.407208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:03.433039 1921402 cri.go:96] found id: ""
	I1222 00:36:03.433053 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.433061 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:03.433066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:03.433141 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:03.458706 1921402 cri.go:96] found id: ""
	I1222 00:36:03.458720 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.458727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:03.458732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:03.458789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:03.484510 1921402 cri.go:96] found id: ""
	I1222 00:36:03.484524 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.484531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:03.484537 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:03.484607 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:03.510384 1921402 cri.go:96] found id: ""
	I1222 00:36:03.510399 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.510407 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:03.510413 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:03.510475 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:03.543748 1921402 cri.go:96] found id: ""
	I1222 00:36:03.543763 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.543770 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:03.543775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:03.543833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:03.571136 1921402 cri.go:96] found id: ""
	I1222 00:36:03.571150 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.571157 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:03.571162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:03.571219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:03.600085 1921402 cri.go:96] found id: ""
	I1222 00:36:03.600100 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.600107 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:03.600113 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:03.600124 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:03.664241 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:03.664252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:03.664264 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:03.738233 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:03.738260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.773144 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:03.773160 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:03.841880 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:03.841901 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.357571 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:06.368813 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:06.368874 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:06.400278 1921402 cri.go:96] found id: ""
	I1222 00:36:06.400293 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.400301 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:06.400312 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:06.400373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:06.430787 1921402 cri.go:96] found id: ""
	I1222 00:36:06.430801 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.430809 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:06.430814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:06.430872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:06.455395 1921402 cri.go:96] found id: ""
	I1222 00:36:06.455409 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.455416 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:06.455421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:06.455478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:06.485881 1921402 cri.go:96] found id: ""
	I1222 00:36:06.485895 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.485902 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:06.485908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:06.485981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:06.512824 1921402 cri.go:96] found id: ""
	I1222 00:36:06.512838 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.512845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:06.512850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:06.512918 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:06.540462 1921402 cri.go:96] found id: ""
	I1222 00:36:06.540477 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.540494 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:06.540500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:06.540572 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:06.566709 1921402 cri.go:96] found id: ""
	I1222 00:36:06.566723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.566731 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:06.566738 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:06.566750 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:06.634309 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:06.634331 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.649342 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:06.649359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:06.719108 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:06.709644   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.711082   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.712027   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.713691   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.714019   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:06.719119 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:06.719129 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:06.791062 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:06.791080 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.321217 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:09.331643 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:09.331702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:09.370112 1921402 cri.go:96] found id: ""
	I1222 00:36:09.370139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.370146 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:09.370152 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:09.370216 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:09.400020 1921402 cri.go:96] found id: ""
	I1222 00:36:09.400041 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.400049 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:09.400055 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:09.400126 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:09.436087 1921402 cri.go:96] found id: ""
	I1222 00:36:09.436114 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.436121 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:09.436126 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:09.436194 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:09.463697 1921402 cri.go:96] found id: ""
	I1222 00:36:09.463711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.463718 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:09.463724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:09.463780 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:09.491471 1921402 cri.go:96] found id: ""
	I1222 00:36:09.491485 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.491492 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:09.491498 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:09.491555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:09.516866 1921402 cri.go:96] found id: ""
	I1222 00:36:09.516880 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.516903 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:09.516908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:09.516967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:09.545747 1921402 cri.go:96] found id: ""
	I1222 00:36:09.545761 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.545768 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:09.545777 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:09.545788 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:09.607682 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:09.599126   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.599834   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.601473   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.602038   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.603633   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:09.607692 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:09.607702 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:09.682101 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:09.682120 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.713866 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:09.713881 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:09.781658 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:09.781677 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.297278 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:12.307636 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:12.307707 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:12.337251 1921402 cri.go:96] found id: ""
	I1222 00:36:12.337266 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.337273 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:12.337278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:12.337373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:12.371706 1921402 cri.go:96] found id: ""
	I1222 00:36:12.371719 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.371727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:12.371732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:12.371789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:12.411047 1921402 cri.go:96] found id: ""
	I1222 00:36:12.411060 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.411067 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:12.411072 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:12.411132 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:12.435536 1921402 cri.go:96] found id: ""
	I1222 00:36:12.435550 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.435557 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:12.435562 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:12.435623 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:12.461780 1921402 cri.go:96] found id: ""
	I1222 00:36:12.461794 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.461802 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:12.461808 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:12.461872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:12.487870 1921402 cri.go:96] found id: ""
	I1222 00:36:12.487884 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.487891 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:12.487897 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:12.487954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:12.513933 1921402 cri.go:96] found id: ""
	I1222 00:36:12.513947 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.513954 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:12.513961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:12.513972 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.528919 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:12.528934 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:12.596732 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:12.587250   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.587883   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.589568   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.590112   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.592244   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:12.596742 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:12.596753 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:12.669434 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:12.669452 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:12.702088 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:12.702104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:15.269222 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:15.279437 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:15.279497 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:15.308005 1921402 cri.go:96] found id: ""
	I1222 00:36:15.308019 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.308026 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:15.308034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:15.308104 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:15.335800 1921402 cri.go:96] found id: ""
	I1222 00:36:15.335814 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.335822 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:15.335829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:15.335889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:15.364041 1921402 cri.go:96] found id: ""
	I1222 00:36:15.364055 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.364064 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:15.364070 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:15.364127 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:15.396634 1921402 cri.go:96] found id: ""
	I1222 00:36:15.396649 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.396657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:15.396663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:15.396721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:15.427551 1921402 cri.go:96] found id: ""
	I1222 00:36:15.427565 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.427572 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:15.427578 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:15.427640 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:15.462972 1921402 cri.go:96] found id: ""
	I1222 00:36:15.462987 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.462994 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:15.463000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:15.463057 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:15.488170 1921402 cri.go:96] found id: ""
	I1222 00:36:15.488184 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.488201 1921402 logs.go:284] No container was found matching "kindnet"
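	The sweep above asks CRI-O for each control-plane container by name and finds none, consistent with the apiserver port being closed. A hedged reconstruction of that sweep, reusing the crictl invocation exactly as logged (the Go wrapper itself is an assumption, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the "listing CRI containers" lines above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same invocation as the logged command:
		// sudo crictl --timeout=10s ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		// --quiet prints only container IDs; empty output means no match.
		if strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}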
	I1222 00:36:15.488208 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:15.488219 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:15.503474 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:15.503499 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:15.576848 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:15.568696   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.569154   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.570855   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.571328   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.572897   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:15.576859 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:15.576870 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:15.647239 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:15.647258 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:15.677589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:15.677604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
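	The pgrep probe recurs roughly every three seconds (00:36:12, :15, :18, ...). A plausible poll-until-deadline reconstruction of that cadence; the three-second interval matches the timestamps above, while the two-minute deadline is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed budget, not minikube's value
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists; the pattern
		// is copied verbatim from the logged command.
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}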
	I1222 00:36:18.247711 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:18.259251 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:18.259314 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:18.285041 1921402 cri.go:96] found id: ""
	I1222 00:36:18.285056 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.285063 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:18.285068 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:18.285129 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:18.312429 1921402 cri.go:96] found id: ""
	I1222 00:36:18.312442 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.312449 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:18.312455 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:18.312509 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:18.342758 1921402 cri.go:96] found id: ""
	I1222 00:36:18.342772 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.342779 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:18.342786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:18.342844 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:18.376484 1921402 cri.go:96] found id: ""
	I1222 00:36:18.376497 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.376505 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:18.376510 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:18.376567 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:18.411956 1921402 cri.go:96] found id: ""
	I1222 00:36:18.411970 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.411987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:18.411992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:18.412059 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:18.441521 1921402 cri.go:96] found id: ""
	I1222 00:36:18.441535 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.441542 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:18.441549 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:18.441606 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:18.466320 1921402 cri.go:96] found id: ""
	I1222 00:36:18.466335 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.466342 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:18.466349 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:18.466359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.540350 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:18.540370 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:18.555809 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:18.555826 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:18.620233 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:18.612078   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.612476   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.613960   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.615241   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.616445   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:18.620247 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:18.620260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:18.690556 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:18.690578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.222964 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:21.233019 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:21.233077 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:21.258106 1921402 cri.go:96] found id: ""
	I1222 00:36:21.258120 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.258127 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:21.258133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:21.258190 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:21.282679 1921402 cri.go:96] found id: ""
	I1222 00:36:21.282693 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.282699 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:21.282704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:21.282761 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:21.311750 1921402 cri.go:96] found id: ""
	I1222 00:36:21.311764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.311789 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:21.311794 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:21.311861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:21.340844 1921402 cri.go:96] found id: ""
	I1222 00:36:21.340858 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.340866 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:21.340871 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:21.340940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:21.375873 1921402 cri.go:96] found id: ""
	I1222 00:36:21.375898 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.375905 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:21.375910 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:21.375973 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:21.406752 1921402 cri.go:96] found id: ""
	I1222 00:36:21.406767 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.406775 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:21.406780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:21.406845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:21.430878 1921402 cri.go:96] found id: ""
	I1222 00:36:21.430892 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.430899 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:21.430906 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:21.430925 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:21.499738 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:21.499757 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.532589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:21.532604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:21.600165 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:21.600183 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:21.615391 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:21.615407 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:21.684109 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:21.675788   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.676744   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678413   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678724   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.680260   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.185197 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:24.196664 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:24.196744 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:24.223518 1921402 cri.go:96] found id: ""
	I1222 00:36:24.223533 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.223540 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:24.223546 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:24.223609 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:24.250130 1921402 cri.go:96] found id: ""
	I1222 00:36:24.250145 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.250152 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:24.250159 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:24.250218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:24.275065 1921402 cri.go:96] found id: ""
	I1222 00:36:24.275079 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.275086 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:24.275091 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:24.275147 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:24.299603 1921402 cri.go:96] found id: ""
	I1222 00:36:24.299617 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.299625 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:24.299630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:24.299688 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:24.325078 1921402 cri.go:96] found id: ""
	I1222 00:36:24.325092 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.325099 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:24.325116 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:24.325173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:24.357840 1921402 cri.go:96] found id: ""
	I1222 00:36:24.357854 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.357871 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:24.357877 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:24.357933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:24.384301 1921402 cri.go:96] found id: ""
	I1222 00:36:24.384315 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.384345 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:24.384356 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:24.384371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:24.455452 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:24.455475 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:24.470577 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:24.470593 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:24.536053 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:24.526996   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.527804   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.529667   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.530226   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.531796   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.536310 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:24.536325 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:24.615547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:24.615572 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.151479 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:27.161793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:27.161853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:27.187096 1921402 cri.go:96] found id: ""
	I1222 00:36:27.187110 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.187117 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:27.187123 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:27.187179 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:27.212527 1921402 cri.go:96] found id: ""
	I1222 00:36:27.212541 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.212548 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:27.212553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:27.212613 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:27.241441 1921402 cri.go:96] found id: ""
	I1222 00:36:27.241455 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.241462 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:27.241467 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:27.241522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:27.266764 1921402 cri.go:96] found id: ""
	I1222 00:36:27.266778 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.266786 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:27.266791 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:27.266848 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:27.292091 1921402 cri.go:96] found id: ""
	I1222 00:36:27.292105 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.292113 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:27.292118 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:27.292175 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:27.316799 1921402 cri.go:96] found id: ""
	I1222 00:36:27.316813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.316821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:27.316826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:27.316896 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:27.341725 1921402 cri.go:96] found id: ""
	I1222 00:36:27.341739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.341746 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:27.341754 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:27.341764 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:27.363209 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:27.363227 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:27.435344 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:27.427029   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.427893   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.429703   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.430074   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.431490   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:27.435355 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:27.435367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:27.505547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:27.505565 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.534380 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:27.534395 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.100897 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:30.112047 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:30.112110 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:30.143007 1921402 cri.go:96] found id: ""
	I1222 00:36:30.143021 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.143028 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:30.143034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:30.143091 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:30.168174 1921402 cri.go:96] found id: ""
	I1222 00:36:30.168188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.168196 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:30.168202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:30.168270 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:30.198167 1921402 cri.go:96] found id: ""
	I1222 00:36:30.198181 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.198189 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:30.198194 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:30.198254 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:30.228238 1921402 cri.go:96] found id: ""
	I1222 00:36:30.228252 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.228259 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:30.228265 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:30.228326 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:30.252893 1921402 cri.go:96] found id: ""
	I1222 00:36:30.252907 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.252914 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:30.252920 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:30.252981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:30.282045 1921402 cri.go:96] found id: ""
	I1222 00:36:30.282067 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.282074 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:30.282080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:30.282146 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:30.308008 1921402 cri.go:96] found id: ""
	I1222 00:36:30.308020 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.308027 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:30.308036 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:30.308047 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:30.380173 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:30.380183 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:30.380193 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:30.455922 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:30.455942 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:30.489214 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:30.489236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.560024 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:30.560042 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
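	Each gathering pass runs the same four shell one-liners. An illustrative Go driver that shells out to them verbatim (only the bash strings come from the log; the driver itself is a sketch and assumes a host with sudo, journalctl, and crictl or docker available):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the "Gathering logs for ..." lines.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		// Run under bash -c, matching how the commands appear in the log.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}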
	I1222 00:36:33.076015 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:33.086212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:33.086271 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:33.111816 1921402 cri.go:96] found id: ""
	I1222 00:36:33.111830 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.111837 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:33.111842 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:33.111899 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:33.139689 1921402 cri.go:96] found id: ""
	I1222 00:36:33.139702 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.139709 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:33.139715 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:33.139771 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:33.164725 1921402 cri.go:96] found id: ""
	I1222 00:36:33.164739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.164746 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:33.164752 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:33.164809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:33.191640 1921402 cri.go:96] found id: ""
	I1222 00:36:33.191654 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.191661 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:33.191666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:33.191721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:33.217604 1921402 cri.go:96] found id: ""
	I1222 00:36:33.217618 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.217625 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:33.217631 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:33.217690 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:33.243174 1921402 cri.go:96] found id: ""
	I1222 00:36:33.243188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.243195 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:33.243201 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:33.243256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:33.271929 1921402 cri.go:96] found id: ""
	I1222 00:36:33.271943 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.271950 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:33.271958 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:33.271968 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:33.300147 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:33.300163 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:33.366779 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:33.366797 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.382105 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:33.382122 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:33.445242 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:33.445252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:33.445261 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:36.014121 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:36.025137 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:36.025200 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:36.051061 1921402 cri.go:96] found id: ""
	I1222 00:36:36.051077 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.051085 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:36.051090 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:36.051154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:36.082445 1921402 cri.go:96] found id: ""
	I1222 00:36:36.082458 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.082466 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:36.082471 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:36.082543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:36.108654 1921402 cri.go:96] found id: ""
	I1222 00:36:36.108668 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.108676 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:36.108682 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:36.108746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:36.133581 1921402 cri.go:96] found id: ""
	I1222 00:36:36.133595 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.133602 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:36.133608 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:36.133672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:36.158297 1921402 cri.go:96] found id: ""
	I1222 00:36:36.158311 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.158318 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:36.158324 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:36.158397 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:36.188437 1921402 cri.go:96] found id: ""
	I1222 00:36:36.188451 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.188458 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:36.188464 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:36.188521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:36.213711 1921402 cri.go:96] found id: ""
	I1222 00:36:36.213725 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.213732 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:36.213740 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:36.213752 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:36.241158 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:36.241174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:36.309133 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:36.309152 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:36.323930 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:36.323945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:36.412954 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:36.412965 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:36.412977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:38.983065 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:38.993162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:38.993219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:39.019770 1921402 cri.go:96] found id: ""
	I1222 00:36:39.019784 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.019791 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:39.019797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:39.019854 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:39.044991 1921402 cri.go:96] found id: ""
	I1222 00:36:39.045005 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.045012 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:39.045017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:39.045074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:39.070008 1921402 cri.go:96] found id: ""
	I1222 00:36:39.070022 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.070029 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:39.070034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:39.070107 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:39.094755 1921402 cri.go:96] found id: ""
	I1222 00:36:39.094768 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.094776 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:39.094781 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:39.094837 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:39.120054 1921402 cri.go:96] found id: ""
	I1222 00:36:39.120068 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.120075 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:39.120080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:39.120136 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:39.144698 1921402 cri.go:96] found id: ""
	I1222 00:36:39.144711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.144719 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:39.144724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:39.144777 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:39.169125 1921402 cri.go:96] found id: ""
	I1222 00:36:39.169139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.169146 1921402 logs.go:284] No container was found matching "kindnet"
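The collector walks the same component list each cycle, asking CRI-O via crictl for containers by name; every query returns an empty ID list, meaning kubelet never created any control-plane containers. The loop can be reproduced by hand with the exact command the log shows (a sketch, assuming it runs inside the node):

	# Same probe minikube runs: list all containers (any state) whose name matches each component.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$c")
	  echo "$c: ${ids:-<none>}"
	done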
	I1222 00:36:39.169153 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:39.169162 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:39.235404 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:39.235423 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:39.250312 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:39.250330 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:39.310727 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:39.310737 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:39.310749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:39.385727 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:39.385745 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:41.917457 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:41.927340 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:41.927400 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:41.952148 1921402 cri.go:96] found id: ""
	I1222 00:36:41.952162 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.952169 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:41.952175 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:41.952231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:41.977527 1921402 cri.go:96] found id: ""
	I1222 00:36:41.977542 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.977549 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:41.977555 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:41.977625 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:42.005444 1921402 cri.go:96] found id: ""
	I1222 00:36:42.005462 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.005470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:42.005476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:42.005551 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:42.041797 1921402 cri.go:96] found id: ""
	I1222 00:36:42.041813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.041820 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:42.041826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:42.041888 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:42.069781 1921402 cri.go:96] found id: ""
	I1222 00:36:42.069797 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.069806 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:42.069812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:42.069880 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:42.104984 1921402 cri.go:96] found id: ""
	I1222 00:36:42.105000 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.105009 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:42.105016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:42.105125 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:42.150604 1921402 cri.go:96] found id: ""
	I1222 00:36:42.150620 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.150629 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:42.150637 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:42.150649 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:42.223375 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:42.223400 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:42.242775 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:42.242802 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:42.318351 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:42.318361 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:42.318371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:42.393293 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:42.393311 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:44.928982 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:44.938908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:44.938968 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:44.963638 1921402 cri.go:96] found id: ""
	I1222 00:36:44.963652 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.963659 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:44.963665 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:44.963723 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:44.988816 1921402 cri.go:96] found id: ""
	I1222 00:36:44.988831 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.988838 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:44.988843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:44.988908 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:45.056183 1921402 cri.go:96] found id: ""
	I1222 00:36:45.056202 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.056212 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:45.056218 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:45.056311 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:45.094388 1921402 cri.go:96] found id: ""
	I1222 00:36:45.094407 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.094416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:45.094422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:45.094503 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:45.141522 1921402 cri.go:96] found id: ""
	I1222 00:36:45.141539 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.141547 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:45.141554 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:45.141632 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:45.184485 1921402 cri.go:96] found id: ""
	I1222 00:36:45.184517 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.184539 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:45.184545 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:45.184691 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:45.244764 1921402 cri.go:96] found id: ""
	I1222 00:36:45.244780 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.244789 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:45.244799 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:45.244810 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:45.323564 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:45.323584 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:45.341732 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:45.341749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:45.426921 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:45.426931 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:45.426941 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:45.496872 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:45.496897 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:48.027828 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:48.038486 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:48.038549 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:48.063156 1921402 cri.go:96] found id: ""
	I1222 00:36:48.063170 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.063177 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:48.063183 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:48.063242 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:48.092751 1921402 cri.go:96] found id: ""
	I1222 00:36:48.092764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.092771 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:48.092776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:48.092833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:48.117758 1921402 cri.go:96] found id: ""
	I1222 00:36:48.117771 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.117778 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:48.117784 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:48.117845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:48.156671 1921402 cri.go:96] found id: ""
	I1222 00:36:48.156685 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.156692 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:48.156697 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:48.156759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:48.181935 1921402 cri.go:96] found id: ""
	I1222 00:36:48.181948 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.181961 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:48.181966 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:48.182025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:48.206454 1921402 cri.go:96] found id: ""
	I1222 00:36:48.206468 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.206476 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:48.206481 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:48.206537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:48.230655 1921402 cri.go:96] found id: ""
	I1222 00:36:48.230669 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.230676 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:48.230684 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:48.230694 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:48.297753 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:48.297772 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:48.312560 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:48.312575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:48.405529 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:48.396998   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.397716   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399109   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399746   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.401379   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:48.405539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:48.405550 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:48.479216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:48.479235 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
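Since the CRI side is empty, the kubelet journal the collector tails is where the underlying failure should surface (failed static-pod manifests, cgroup or driver errors, and so on). Outside the collector, the same logs can be pulled from the host with (a sketch; `<profile>` stands for this test's cluster name, which the excerpt does not show):

	# Tail the same units the log collector reads, from the host.
	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube ssh -p <profile> -- sudo journalctl -u crio -n 400 --no-pager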
	I1222 00:36:51.006751 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:51.017983 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:51.018043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:51.045081 1921402 cri.go:96] found id: ""
	I1222 00:36:51.045095 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.045102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:51.045133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:51.045231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:51.072909 1921402 cri.go:96] found id: ""
	I1222 00:36:51.072923 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.072931 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:51.072936 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:51.073002 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:51.101558 1921402 cri.go:96] found id: ""
	I1222 00:36:51.101582 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.101590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:51.101596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:51.101667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:51.128728 1921402 cri.go:96] found id: ""
	I1222 00:36:51.128762 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.128771 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:51.128777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:51.128843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:51.155107 1921402 cri.go:96] found id: ""
	I1222 00:36:51.155123 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.155129 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:51.155135 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:51.155196 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:51.183281 1921402 cri.go:96] found id: ""
	I1222 00:36:51.183295 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.183302 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:51.183308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:51.183365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:51.209022 1921402 cri.go:96] found id: ""
	I1222 00:36:51.209050 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.209057 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:51.209078 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:51.209088 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:51.272611 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:51.264496   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.265088   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.266719   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.267234   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.268804   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:51.272624 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:51.272634 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:51.342544 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:51.342563 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.373712 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:51.373727 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:51.452392 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:51.452419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:53.968915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:53.979167 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:53.979236 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:54.009488 1921402 cri.go:96] found id: ""
	I1222 00:36:54.009513 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.009521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:54.009527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:54.009592 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:54.036904 1921402 cri.go:96] found id: ""
	I1222 00:36:54.036919 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.036927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:54.036932 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:54.036991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:54.065561 1921402 cri.go:96] found id: ""
	I1222 00:36:54.065576 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.065583 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:54.065588 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:54.065651 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:54.092446 1921402 cri.go:96] found id: ""
	I1222 00:36:54.092472 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.092479 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:54.092485 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:54.092554 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:54.124282 1921402 cri.go:96] found id: ""
	I1222 00:36:54.124296 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.124303 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:54.124308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:54.124376 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:54.149069 1921402 cri.go:96] found id: ""
	I1222 00:36:54.149082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.149089 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:54.149095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:54.149150 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:54.173952 1921402 cri.go:96] found id: ""
	I1222 00:36:54.173966 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.173973 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:54.173985 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:54.173996 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:54.243687 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:54.243707 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:54.274613 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:54.274629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:54.347248 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:54.347266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:54.368410 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:54.368429 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:54.442889 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:54.435376   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.435813   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437040   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437607   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.439090   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:56.943148 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:56.953207 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:56.953265 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:56.980068 1921402 cri.go:96] found id: ""
	I1222 00:36:56.980082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:56.980089 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:56.980095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:56.980154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:57.009964 1921402 cri.go:96] found id: ""
	I1222 00:36:57.009979 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.009987 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:57.009992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:57.010056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:57.040772 1921402 cri.go:96] found id: ""
	I1222 00:36:57.040788 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.040796 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:57.040801 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:57.040861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:57.066436 1921402 cri.go:96] found id: ""
	I1222 00:36:57.066449 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.066456 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:57.066461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:57.066518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:57.090966 1921402 cri.go:96] found id: ""
	I1222 00:36:57.090980 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.090987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:57.090992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:57.091052 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:57.120134 1921402 cri.go:96] found id: ""
	I1222 00:36:57.120149 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.120156 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:57.120162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:57.120218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:57.145014 1921402 cri.go:96] found id: ""
	I1222 00:36:57.145027 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.145034 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:57.145041 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:57.145052 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:57.214243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:57.214263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:57.248369 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:57.248387 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:57.316316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:57.316337 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:57.331296 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:57.331313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:57.423941 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:57.415708   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.416209   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.417860   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.418329   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.419836   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:59.924191 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:59.934553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:59.934611 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:59.963952 1921402 cri.go:96] found id: ""
	I1222 00:36:59.963967 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.963974 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:59.963980 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:59.964039 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:59.989241 1921402 cri.go:96] found id: ""
	I1222 00:36:59.989254 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.989261 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:59.989267 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:59.989346 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:00.041618 1921402 cri.go:96] found id: ""
	I1222 00:37:00.041645 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.041653 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:00.041659 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:00.041743 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:00.156035 1921402 cri.go:96] found id: ""
	I1222 00:37:00.156051 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.156058 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:00.156066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:00.156135 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:00.209590 1921402 cri.go:96] found id: ""
	I1222 00:37:00.209625 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.209642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:00.209658 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:00.209760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:00.265952 1921402 cri.go:96] found id: ""
	I1222 00:37:00.265968 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.265976 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:00.265982 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:00.266053 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:00.306640 1921402 cri.go:96] found id: ""
	I1222 00:37:00.306669 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.306678 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:00.306692 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:00.306705 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:00.391041 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:00.391062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:00.408447 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:00.408463 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:00.477856 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:00.469060   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.469638   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.471535   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.472037   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.473728   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:00.477866 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:00.477876 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:00.550139 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:00.550158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:03.081702 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:03.091652 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:03.091709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:03.116220 1921402 cri.go:96] found id: ""
	I1222 00:37:03.116234 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.116241 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:03.116247 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:03.116303 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:03.141299 1921402 cri.go:96] found id: ""
	I1222 00:37:03.141313 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.141340 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:03.141345 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:03.141402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:03.166615 1921402 cri.go:96] found id: ""
	I1222 00:37:03.166630 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.166637 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:03.166642 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:03.166706 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:03.196327 1921402 cri.go:96] found id: ""
	I1222 00:37:03.196341 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.196358 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:03.196366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:03.196432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:03.221691 1921402 cri.go:96] found id: ""
	I1222 00:37:03.221705 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.221712 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:03.221717 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:03.221774 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:03.247455 1921402 cri.go:96] found id: ""
	I1222 00:37:03.247470 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.247477 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:03.247482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:03.247540 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:03.277376 1921402 cri.go:96] found id: ""
	I1222 00:37:03.277398 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.277406 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:03.277414 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:03.277424 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:03.345137 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:03.345158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:03.362155 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:03.362174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:03.438149 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:03.429891   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.430725   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.431629   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.433223   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.434022   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:03.438160 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:03.438172 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:03.508243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:03.508262 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
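The three-second cycle above repeats for as long as minikube waits for the apiserver to return: probe for a kube-apiserver process, list CRI containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal Go sketch of that poll-and-diagnose loop follows; the helper names probeAPIServer and gatherDiagnostics and the fixed 3 s sleep are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func probeAPIServer() bool {
		// Mirrors the probe in the log: pgrep exits 0 only when a matching
		// kube-apiserver process exists.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func gatherDiagnostics() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			// Same listing as "sudo crictl --timeout=10s ps -a --quiet --name=<name>".
			out, _ := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if len(out) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
			}
		}
		// kubelet/dmesg/describe-nodes/CRI-O gathering elided; see the log lines above.
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if probeAPIServer() {
				return // apiserver is back: the restart succeeded
			}
			gatherDiagnostics()
			time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
		}
		// Deadline exceeded: minikube gives up and resets the cluster (below).
	}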
	I1222 00:37:06.037621 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:06.048285 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:06.048348 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:06.076896 1921402 cri.go:96] found id: ""
	I1222 00:37:06.076911 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.076918 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:06.076924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:06.076985 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:06.102650 1921402 cri.go:96] found id: ""
	I1222 00:37:06.102665 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.102671 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:06.102677 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:06.102732 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:06.127851 1921402 cri.go:96] found id: ""
	I1222 00:37:06.127865 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.127872 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:06.127878 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:06.127934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:06.154598 1921402 cri.go:96] found id: ""
	I1222 00:37:06.154612 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.154619 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:06.154625 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:06.154683 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:06.181128 1921402 cri.go:96] found id: ""
	I1222 00:37:06.181142 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.181149 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:06.181155 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:06.181215 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:06.207749 1921402 cri.go:96] found id: ""
	I1222 00:37:06.207762 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.207770 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:06.207776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:06.207832 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:06.240273 1921402 cri.go:96] found id: ""
	I1222 00:37:06.240287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.240295 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:06.240302 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:06.240313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:06.310353 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:06.310372 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.337909 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:06.337928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:06.413399 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:06.413419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:06.428074 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:06.428091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:06.491546 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:06.483115   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.484395   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.485866   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.486184   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.487595   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:08.991781 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:09.003482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:09.003561 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:09.030801 1921402 cri.go:96] found id: ""
	I1222 00:37:09.030816 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.030823 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:09.030829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:09.030884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:09.063705 1921402 cri.go:96] found id: ""
	I1222 00:37:09.063728 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.063740 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:09.063746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:09.063819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:09.094354 1921402 cri.go:96] found id: ""
	I1222 00:37:09.094370 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.094378 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:09.094384 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:09.094441 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:09.120692 1921402 cri.go:96] found id: ""
	I1222 00:37:09.120706 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.120714 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:09.120719 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:09.120776 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:09.146623 1921402 cri.go:96] found id: ""
	I1222 00:37:09.146642 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.146649 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:09.146654 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:09.146711 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:09.176276 1921402 cri.go:96] found id: ""
	I1222 00:37:09.176290 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.176297 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:09.176302 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:09.176359 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:09.205092 1921402 cri.go:96] found id: ""
	I1222 00:37:09.205106 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.205113 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:09.205120 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:09.205130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:09.271826 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:09.271851 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:09.287713 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:09.287729 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:09.372451 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:09.363564   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.364517   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366322   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366665   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.368181   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:09.372461 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:09.372474 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:09.456787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:09.456808 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:11.988321 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:11.998626 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:11.998682 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:12.034498 1921402 cri.go:96] found id: ""
	I1222 00:37:12.034513 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.034521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:12.034527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:12.034588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:12.065008 1921402 cri.go:96] found id: ""
	I1222 00:37:12.065023 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.065033 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:12.065038 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:12.065098 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:12.092442 1921402 cri.go:96] found id: ""
	I1222 00:37:12.092456 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.092463 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:12.092469 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:12.092529 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:12.118395 1921402 cri.go:96] found id: ""
	I1222 00:37:12.118408 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.118416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:12.118421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:12.118476 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:12.143621 1921402 cri.go:96] found id: ""
	I1222 00:37:12.143634 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.143642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:12.143647 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:12.143702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:12.172923 1921402 cri.go:96] found id: ""
	I1222 00:37:12.172937 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.172944 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:12.172949 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:12.173004 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:12.199794 1921402 cri.go:96] found id: ""
	I1222 00:37:12.199808 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.199815 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:12.199822 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:12.199833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:12.263467 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:12.255080   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.255523   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257075   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257671   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.259356   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:12.263477 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:12.263488 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:12.334334 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:12.334358 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:12.374086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:12.374104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:12.447671 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:12.447690 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:14.962865 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:14.972963 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:14.973024 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:14.998158 1921402 cri.go:96] found id: ""
	I1222 00:37:14.998172 1921402 logs.go:282] 0 containers: []
	W1222 00:37:14.998179 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:14.998185 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:14.998243 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:15.029851 1921402 cri.go:96] found id: ""
	I1222 00:37:15.029867 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.029874 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:15.029882 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:15.029948 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:15.057647 1921402 cri.go:96] found id: ""
	I1222 00:37:15.057662 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.057668 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:15.057674 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:15.057742 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:15.084111 1921402 cri.go:96] found id: ""
	I1222 00:37:15.084126 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.084134 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:15.084140 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:15.084205 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:15.114963 1921402 cri.go:96] found id: ""
	I1222 00:37:15.114978 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.114984 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:15.114990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:15.115050 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:15.140108 1921402 cri.go:96] found id: ""
	I1222 00:37:15.140122 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.140129 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:15.140134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:15.140192 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:15.169273 1921402 cri.go:96] found id: ""
	I1222 00:37:15.169287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.169294 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:15.169302 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:15.169312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:15.235722 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:15.235741 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:15.250646 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:15.250662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:15.315252 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:15.306828   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.307685   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309165   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309676   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.311367   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:15.315262 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:15.315272 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:15.390764 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:15.390784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:17.922554 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:17.932679 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:17.932739 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:17.957865 1921402 cri.go:96] found id: ""
	I1222 00:37:17.957879 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.957886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:17.957891 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:17.957952 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:17.987323 1921402 cri.go:96] found id: ""
	I1222 00:37:17.987337 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.987344 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:17.987349 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:17.987410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:18.017446 1921402 cri.go:96] found id: ""
	I1222 00:37:18.017461 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.017469 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:18.017474 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:18.017538 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:18.043105 1921402 cri.go:96] found id: ""
	I1222 00:37:18.043119 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.043126 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:18.043131 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:18.043188 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:18.069214 1921402 cri.go:96] found id: ""
	I1222 00:37:18.069229 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.069236 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:18.069241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:18.069304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:18.094585 1921402 cri.go:96] found id: ""
	I1222 00:37:18.094598 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.094605 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:18.094610 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:18.094670 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:18.124271 1921402 cri.go:96] found id: ""
	I1222 00:37:18.124292 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.124300 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:18.124307 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:18.124318 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:18.191628 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:18.191645 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:18.206373 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:18.206389 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:18.271908 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:18.263844   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.264495   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.265977   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.266531   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.267971   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:18.271918 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:18.271931 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:18.347020 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:18.347041 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:20.886577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:20.896630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:20.896687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:20.924710 1921402 cri.go:96] found id: ""
	I1222 00:37:20.924724 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.924731 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:20.924737 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:20.924797 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:20.949929 1921402 cri.go:96] found id: ""
	I1222 00:37:20.949944 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.949951 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:20.949956 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:20.950015 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:20.975241 1921402 cri.go:96] found id: ""
	I1222 00:37:20.975254 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.975262 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:20.975273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:20.975332 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:21.007134 1921402 cri.go:96] found id: ""
	I1222 00:37:21.007149 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.007157 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:21.007162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:21.007226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:21.036823 1921402 cri.go:96] found id: ""
	I1222 00:37:21.036837 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.036845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:21.036850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:21.036922 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:21.066186 1921402 cri.go:96] found id: ""
	I1222 00:37:21.066201 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.066208 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:21.066214 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:21.066273 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:21.092594 1921402 cri.go:96] found id: ""
	I1222 00:37:21.092609 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.092616 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:21.092624 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:21.092658 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:21.155743 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:21.147229   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.148232   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.149047   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150532   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150962   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:21.155753 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:21.155763 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:21.230114 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:21.230133 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:21.258046 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:21.258062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:21.330592 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:21.330619 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:23.847823 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:23.857930 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:23.857993 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:23.883272 1921402 cri.go:96] found id: ""
	I1222 00:37:23.883286 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.883294 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:23.883300 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:23.883355 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:23.907502 1921402 cri.go:96] found id: ""
	I1222 00:37:23.907515 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.907523 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:23.907528 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:23.907588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:23.932684 1921402 cri.go:96] found id: ""
	I1222 00:37:23.932698 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.932705 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:23.932711 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:23.932769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:23.957636 1921402 cri.go:96] found id: ""
	I1222 00:37:23.957650 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.957657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:23.957662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:23.957719 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:23.982476 1921402 cri.go:96] found id: ""
	I1222 00:37:23.982490 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.982497 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:23.982502 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:23.982560 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:24.014243 1921402 cri.go:96] found id: ""
	I1222 00:37:24.014258 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.014265 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:24.014278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:24.014341 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:24.041726 1921402 cri.go:96] found id: ""
	I1222 00:37:24.041740 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.041748 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:24.041756 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:24.041766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:24.110293 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:24.110312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:24.126712 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:24.126736 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:24.190552 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:24.190564 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:24.190575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:24.262140 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:24.262165 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:26.797351 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:26.808692 1921402 kubeadm.go:602] duration metric: took 4m2.99406476s to restartPrimaryControlPlane
	W1222 00:37:26.808754 1921402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:37:26.808829 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
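The 4m2.99s duration metric is consistent with the retries above: 00:37:26.81 − 4m2.99s ≈ 00:33:23.8, placing the start of the restart attempt (outside this excerpt) about 243 s earlier, i.e. roughly eighty probe cycles at the ~3 s cadence. With no control-plane containers found in any pass, minikube abandons the in-place restart and falls back to kubeadm reset followed by a fresh kubeadm init.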
	I1222 00:37:27.218865 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:37:27.231651 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:37:27.239390 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:37:27.239447 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:37:27.246990 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:37:27.247000 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:37:27.247052 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:37:27.254602 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:37:27.254657 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:37:27.261917 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:37:27.269756 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:37:27.269819 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:37:27.277175 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.285102 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:37:27.285164 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.292582 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:37:27.300147 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:37:27.300201 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
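Each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails; here the files are simply absent after the reset, so every grep exits with status 2 and every rm is a no-op, leaving kubeadm init to regenerate them. A sketch of that check-and-remove pass, assuming only the endpoint and file names shown in the log:

	package main

	import "os/exec"

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8441"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the endpoint is absent or the file is
			// missing (status 2 in the log); either way the stale file is
			// removed so that kubeadm init can write a fresh one.
			if exec.Command("sudo", "grep", endpoint, conf).Run() != nil {
				_ = exec.Command("sudo", "rm", "-f", conf).Run()
			}
		}
	}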
	I1222 00:37:27.307155 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:37:27.346116 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:37:27.346165 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:37:27.414020 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:37:27.414089 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:37:27.414123 1921402 kubeadm.go:319] OS: Linux
	I1222 00:37:27.414168 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:37:27.414215 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:37:27.414261 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:37:27.414307 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:37:27.414354 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:37:27.414404 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:37:27.414448 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:37:27.414494 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:37:27.414538 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:37:27.481436 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:37:27.481554 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:37:27.481674 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:37:27.489751 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:37:27.493685 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:37:27.493793 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:37:27.493881 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:37:27.493967 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:37:27.494032 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:37:27.494108 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:37:27.494164 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:37:27.494230 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:37:27.494294 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:37:27.494372 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:37:27.494448 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:37:27.494487 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:37:27.494545 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:37:27.957235 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:37:28.242322 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:37:28.521612 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:37:28.976211 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:37:29.260315 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:37:29.260957 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:37:29.263905 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:37:29.266956 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:37:29.267051 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:37:29.267139 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:37:29.268296 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:37:29.283008 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:37:29.283332 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:37:29.290939 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:37:29.291397 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:37:29.291445 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:37:29.423121 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:37:29.423418 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:41:29.424266 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00111221s
	I1222 00:41:29.424291 1921402 kubeadm.go:319] 
	I1222 00:41:29.424347 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:41:29.424379 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:41:29.424483 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:41:29.424487 1921402 kubeadm.go:319] 
	I1222 00:41:29.424591 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:41:29.424622 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:41:29.424652 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:41:29.424655 1921402 kubeadm.go:319] 
	I1222 00:41:29.429048 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:41:29.429508 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:41:29.429617 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:41:29.429851 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:41:29.429855 1921402 kubeadm.go:319] 
	I1222 00:41:29.429925 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:41:29.430059 1921402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00111221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:41:29.430143 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:41:29.837125 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:41:29.849742 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:41:29.849801 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:41:29.857510 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:41:29.857520 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:41:29.857569 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:41:29.865043 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:41:29.865100 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:41:29.872214 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:41:29.879983 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:41:29.880037 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:41:29.887891 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.895656 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:41:29.895711 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.902944 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:41:29.910473 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:41:29.910531 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:41:29.917929 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:41:29.958087 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:41:29.958157 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:41:30.050415 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:41:30.050500 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:41:30.050542 1921402 kubeadm.go:319] OS: Linux
	I1222 00:41:30.050625 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:41:30.050679 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:41:30.050743 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:41:30.050790 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:41:30.050861 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:41:30.050918 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:41:30.050963 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:41:30.051010 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:41:30.051055 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:41:30.118501 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:41:30.118617 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:41:30.118712 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:41:30.125936 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:41:30.127797 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:41:30.127893 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:41:30.129701 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:41:30.129794 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:41:30.129856 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:41:30.129967 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:41:30.130025 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:41:30.130096 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:41:30.130183 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:41:30.130272 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:41:30.130344 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:41:30.130391 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:41:30.130455 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:41:30.344018 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:41:30.899259 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:41:31.157696 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:41:31.268170 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:41:31.664677 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:41:31.665563 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:41:31.668250 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:41:31.669734 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:41:31.669834 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:41:31.670177 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:41:31.671652 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:41:31.687500 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:41:31.687601 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:41:31.694901 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:41:31.695472 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:41:31.695535 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:41:31.830593 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:41:31.830722 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:45:31.830877 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001320631s
	I1222 00:45:31.831182 1921402 kubeadm.go:319] 
	I1222 00:45:31.831295 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:45:31.831352 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:45:31.831682 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:45:31.831690 1921402 kubeadm.go:319] 
	I1222 00:45:31.831881 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:45:31.831936 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:45:31.831990 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:45:31.831994 1921402 kubeadm.go:319] 
	I1222 00:45:31.836486 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:45:31.837376 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:45:31.837502 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:45:31.837782 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:45:31.837816 1921402 kubeadm.go:319] 
	I1222 00:45:31.837891 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:45:31.837949 1921402 kubeadm.go:403] duration metric: took 12m8.05957574s to StartCluster
	I1222 00:45:31.837990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:45:31.838056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:45:31.866824 1921402 cri.go:96] found id: ""
	I1222 00:45:31.866837 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.866844 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:45:31.866850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:45:31.866906 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:45:31.892772 1921402 cri.go:96] found id: ""
	I1222 00:45:31.892797 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.892805 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:45:31.892812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:45:31.892881 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:45:31.919961 1921402 cri.go:96] found id: ""
	I1222 00:45:31.919975 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.919982 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:45:31.919987 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:45:31.920043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:45:31.945683 1921402 cri.go:96] found id: ""
	I1222 00:45:31.945697 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.945704 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:45:31.945710 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:45:31.945770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:45:31.971052 1921402 cri.go:96] found id: ""
	I1222 00:45:31.971066 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.971074 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:45:31.971080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:45:31.971139 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:45:31.997143 1921402 cri.go:96] found id: ""
	I1222 00:45:31.997158 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.997165 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:45:31.997170 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:45:31.997226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:45:32.024698 1921402 cri.go:96] found id: ""
	I1222 00:45:32.024713 1921402 logs.go:282] 0 containers: []
	W1222 00:45:32.024721 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:45:32.024729 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:45:32.024739 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:45:32.092110 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:45:32.092130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:45:32.109516 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:45:32.109532 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:45:32.182932 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:45:32.182942 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:45:32.182952 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:45:32.255366 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:45:32.255386 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:45:32.288418 1921402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:45:32.288449 1921402 out.go:285] * 
	W1222 00:45:32.288511 1921402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.288527 1921402 out.go:285] * 
	W1222 00:45:32.290657 1921402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:45:32.295098 1921402 out.go:203] 
	W1222 00:45:32.297043 1921402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.297089 1921402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:45:32.297111 1921402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:45:32.299905 1921402 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.02679827Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.026861283Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.026928974Z" level=info msg="Create NRI interface"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027081987Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027098405Z" level=info msg="runtime interface created"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027113797Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027130551Z" level=info msg="runtime interface starting up..."
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027141611Z" level=info msg="starting plugins..."
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027158522Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027257579Z" level=info msg="No systemd watchdog enabled"
	Dec 22 00:33:22 functional-837733 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.485049198Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=f8b31a28-1e5e-4924-b750-a0efe7effecd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.485769225Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=96015691-150e-4012-8e1b-65198bb7cc23 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.486269641Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=155cce42-29f5-4d02-81fe-382aac99f970 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.486755321Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=3cd990c1-1fbc-47fd-a924-3e1fddfaba39 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.487198967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=fb5de215-5f60-4f7a-a012-72665a4d2b87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.48764071Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=88930c08-aae3-4b3f-862d-918e893427ff name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.488127998Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=0b18c52c-a177-41d0-84f8-a99061f82c45 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.121686935Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=a65ca03c-da33-4282-bf75-37afebaaec0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122407537Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=3242b675-c84d-4560-8bb8-f6d771881c0b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:33.497783   21251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:33.498577   21251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:33.500110   21251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:33.500420   21251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:33.501902   21251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:45:33 up 1 day,  7:28,  0 user,  load average: 0.39, 0.28, 0.50
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:45:30 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:31 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 00:45:31 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:31 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:31 functional-837733 kubelet[21059]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:31 functional-837733 kubelet[21059]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:31 functional-837733 kubelet[21059]: E1222 00:45:31.391049   21059 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:31 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:31 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 00:45:32 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:32 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:32 functional-837733 kubelet[21124]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:32 functional-837733 kubelet[21124]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:32 functional-837733 kubelet[21124]: E1222 00:45:32.157065   21124 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 00:45:32 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:32 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:32 functional-837733 kubelet[21166]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:32 functional-837733 kubelet[21166]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:32 functional-837733 kubelet[21166]: E1222 00:45:32.907858   21166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (339.820823ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (735.41s)
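Note on root cause: the kubelet journal above shows kubelet v1.35.0-rc.1 exiting during config validation because the host is still on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so systemd cycles it (restart counters 319-321) and the apiserver on :8441 never comes up, which is why every kubectl call in this section sees "connection refused". Below is a minimal standalone sketch of the same hierarchy probe, assuming golang.org/x/sys/unix is available; the file name and messages are illustrative, not kubelet's actual code:

	// cgroupcheck.go: report whether /sys/fs/cgroup is the cgroup v2 unified
	// hierarchy (cgroup2fs) or the legacy v1 layout that kubelet now rejects.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	// CGROUP2_SUPER_MAGIC from linux/magic.h.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified): kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1 (legacy): kubelet v1.35.0-rc.1 refuses to start")
		}
	}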

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-837733 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-837733 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (62.393228ms)

** stderr ** 
	E1222 00:45:34.543122 1933440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:34.544613 1933440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:34.545982 1933440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:34.547322 1933440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:34.548673 1933440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-837733 get po -l tier=control-plane -n kube-system -o=json": exit status 1
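For context, functional_test.go:825 feeds the control-plane pod list to the health check as JSON; here kubectl exits before producing any pod data because the apiserver is down. The following is a minimal sketch of that style of check against a reachable cluster; the struct covers only the fields such a check needs and is illustrative, not the test's actual implementation:

	// componenthealth.go: list control-plane pods and print phase/readiness,
	// mirroring the shape of the data the failed command above would return.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-837733",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			// The path taken in this report: connection to :8441 refused.
			fmt.Fprintln(os.Stderr, "kubectl failed:", err)
			os.Exit(1)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Fprintln(os.Stderr, "bad JSON:", err)
			os.Exit(1)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}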
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
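The inspect output above shows the apiserver port 8441/tcp published on 127.0.0.1:38474 even though nothing is listening inside the container. The harness resolves such mappings with docker inspect Go templates (the Last Start log further down uses the same pattern for 22/tcp); here is a standalone sketch under that assumption:

	// apiserverport.go: extract the host port mapped to the container's
	// 8441/tcp using the same Go-template query style seen in the logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "functional-837733").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}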
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (298.378793ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-885873 image ls --format table --alsologtostderr                                                                                     │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr                                          │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ update-context │ functional-885873 update-context --alsologtostderr -v=2                                                                                         │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ image          │ functional-885873 image ls                                                                                                                      │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ delete         │ -p functional-885873                                                                                                                            │ functional-885873 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │ 22 Dec 25 00:18 UTC │
	│ start          │ -p functional-837733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:18 UTC │                     │
	│ start          │ -p functional-837733 --alsologtostderr -v=8                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:26 UTC │                     │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add registry.k8s.io/pause:latest                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache add minikube-local-cache-test:functional-837733                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ functional-837733 cache delete minikube-local-cache-test:functional-837733                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl images                                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ cache          │ functional-837733 cache reload                                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh            │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ kubectl        │ functional-837733 kubectl -- --context functional-837733 get pods                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ start          │ -p functional-837733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:33:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:33:19.123176 1921402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:33:19.123288 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123297 1921402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:33:19.123301 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123533 1921402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:33:19.123864 1921402 out.go:368] Setting JSON to false
	I1222 00:33:19.124702 1921402 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112549,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:33:19.124753 1921402 start.go:143] virtualization:  
	I1222 00:33:19.129939 1921402 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:33:19.132820 1921402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:33:19.132934 1921402 notify.go:221] Checking for updates...
	I1222 00:33:19.138397 1921402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:33:19.141280 1921402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:33:19.144132 1921402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:33:19.147015 1921402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:33:19.149703 1921402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:33:19.152994 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:19.153132 1921402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:33:19.180144 1921402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:33:19.180247 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.239273 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.229718138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.239382 1921402 docker.go:319] overlay module found
	I1222 00:33:19.242404 1921402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:33:19.245466 1921402 start.go:309] selected driver: docker
	I1222 00:33:19.245476 1921402 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.245601 1921402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:33:19.245707 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.303768 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.294614837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.304252 1921402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:33:19.304275 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:19.304327 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:19.304366 1921402 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.307509 1921402 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:33:19.310426 1921402 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:33:19.313438 1921402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:33:19.316296 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:19.316334 1921402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:33:19.316337 1921402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:33:19.316345 1921402 cache.go:65] Caching tarball of preloaded images
	I1222 00:33:19.316452 1921402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:33:19.316461 1921402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:33:19.316560 1921402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:33:19.335533 1921402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:33:19.335544 1921402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:33:19.335564 1921402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:33:19.335600 1921402 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:33:19.335668 1921402 start.go:364] duration metric: took 47.728µs to acquireMachinesLock for "functional-837733"
	I1222 00:33:19.335688 1921402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:33:19.335699 1921402 fix.go:54] fixHost starting: 
	I1222 00:33:19.335947 1921402 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:33:19.365010 1921402 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:33:19.365031 1921402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:33:19.368348 1921402 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:33:19.368376 1921402 machine.go:94] provisionDockerMachine start ...
	I1222 00:33:19.368449 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.391712 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.392069 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.392078 1921402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:33:19.528918 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.528932 1921402 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:33:19.529008 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.546388 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.546693 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.546705 1921402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:33:19.690177 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.690264 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.708307 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.708609 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.708623 1921402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:33:19.841808 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:33:19.841826 1921402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:33:19.841843 1921402 ubuntu.go:190] setting up certificates
	I1222 00:33:19.841860 1921402 provision.go:84] configureAuth start
	I1222 00:33:19.841918 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:19.861819 1921402 provision.go:143] copyHostCerts
	I1222 00:33:19.861874 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:33:19.861889 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:33:19.861964 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:33:19.862062 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:33:19.862066 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:33:19.862089 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:33:19.862134 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:33:19.862137 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:33:19.862158 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:33:19.862198 1921402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:33:19.952084 1921402 provision.go:177] copyRemoteCerts
	I1222 00:33:19.952138 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:33:19.952182 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.970262 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.085780 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:33:20.107727 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:33:20.128189 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:33:20.148142 1921402 provision.go:87] duration metric: took 306.259546ms to configureAuth
	I1222 00:33:20.148170 1921402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:33:20.148393 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:20.148508 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.170002 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:20.170298 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:20.170316 1921402 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:33:20.512165 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:33:20.512178 1921402 machine.go:97] duration metric: took 1.143796307s to provisionDockerMachine
	I1222 00:33:20.512211 1921402 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:33:20.512232 1921402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:33:20.512310 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:33:20.512363 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.530913 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.629793 1921402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:33:20.633223 1921402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:33:20.633240 1921402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:33:20.633250 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:33:20.633308 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:33:20.633414 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:33:20.633491 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:33:20.633538 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:33:20.641126 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:20.658985 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:33:20.677182 1921402 start.go:296] duration metric: took 164.956569ms for postStartSetup
	I1222 00:33:20.677276 1921402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:33:20.677314 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.694936 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.786654 1921402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:33:20.791424 1921402 fix.go:56] duration metric: took 1.455724646s for fixHost
	I1222 00:33:20.791440 1921402 start.go:83] releasing machines lock for "functional-837733", held for 1.455763874s
	I1222 00:33:20.791545 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:20.808426 1921402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:33:20.808469 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.808475 1921402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:33:20.808537 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.830636 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.834199 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.933012 1921402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:33:21.025436 1921402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:33:21.067278 1921402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:33:21.071776 1921402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:33:21.071868 1921402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:33:21.079967 1921402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:33:21.079982 1921402 start.go:496] detecting cgroup driver to use...
	I1222 00:33:21.080013 1921402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:33:21.080062 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:33:21.095680 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:33:21.108798 1921402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:33:21.108852 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:33:21.124535 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:33:21.138031 1921402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:33:21.254740 1921402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:33:21.374618 1921402 docker.go:234] disabling docker service ...
	I1222 00:33:21.374676 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:33:21.389413 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:33:21.402447 1921402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:33:21.520019 1921402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:33:21.643581 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:33:21.657853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:33:21.671138 1921402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:33:21.671202 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.680093 1921402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:33:21.680158 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.688698 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.697375 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.705901 1921402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:33:21.713923 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.722872 1921402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.731197 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
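Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following (reconstructed from the commands, not dumped from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]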
	I1222 00:33:21.739610 1921402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:33:21.747277 1921402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:33:21.754578 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:21.862305 1921402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:33:22.033055 1921402 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:33:22.033138 1921402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:33:22.037243 1921402 start.go:564] Will wait 60s for crictl version
	I1222 00:33:22.037298 1921402 ssh_runner.go:195] Run: which crictl
	I1222 00:33:22.040800 1921402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:33:22.068746 1921402 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:33:22.068856 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.098980 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.132885 1921402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:33:22.135905 1921402 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:33:22.151401 1921402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:33:22.158212 1921402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:33:22.160827 1921402 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:33:22.160965 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:22.161027 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.202970 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.202982 1921402 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:33:22.203036 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.229110 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.229123 1921402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:33:22.229129 1921402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:33:22.229228 1921402 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
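In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from kubelet.service before supplying a new one; without it, systemd rejects a second ExecStart for a non-oneshot service. The merged unit can be inspected on the node with:

    systemctl cat kubelet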
	I1222 00:33:22.229311 1921402 ssh_runner.go:195] Run: crio config
	I1222 00:33:22.300174 1921402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:33:22.300213 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:22.300224 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:22.300232 1921402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:33:22.300257 1921402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:33:22.300399 1921402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
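Before this config is handed to kubeadm, it can be sanity-checked on the node. Recent kubeadm releases (v1.26 and later) ship a validator; a sketch against the path the file is written to below:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml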
	
	I1222 00:33:22.300509 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:33:22.308005 1921402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:33:22.308080 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:33:22.315446 1921402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:33:22.327549 1921402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:33:22.340301 1921402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1222 00:33:22.356214 1921402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:33:22.360143 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:22.489828 1921402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:33:23.096445 1921402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:33:23.096455 1921402 certs.go:195] generating shared ca certs ...
	I1222 00:33:23.096469 1921402 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:33:23.096601 1921402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:33:23.096676 1921402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:33:23.096682 1921402 certs.go:257] generating profile certs ...
	I1222 00:33:23.096765 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:33:23.096814 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:33:23.096847 1921402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:33:23.097000 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:33:23.097032 1921402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:33:23.097039 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:33:23.097063 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:33:23.097085 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:33:23.097109 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:33:23.097150 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:23.097934 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:33:23.122733 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:33:23.143796 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:33:23.165783 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:33:23.184349 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:33:23.201211 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:33:23.218537 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:33:23.235950 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:33:23.253260 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:33:23.270022 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:33:23.287385 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:33:23.304167 1921402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:33:23.316581 1921402 ssh_runner.go:195] Run: openssl version
	I1222 00:33:23.322651 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.329731 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:33:23.336799 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340358 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340410 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.381351 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
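The test/ln/openssl sequence above reproduces what c_rehash does: OpenSSL locates a trusted CA by hashing its subject name and looking for /etc/ssl/certs/<hash>.0, so the 3ec20f2e checked here is the hash printed by the x509 command for 18812192.pem. Creating such a link by hand looks like:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem)
    sudo ln -fs /etc/ssl/certs/18812192.pem "/etc/ssl/certs/${h}.0"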
	I1222 00:33:23.388858 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.396030 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:33:23.403636 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407354 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407408 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.449165 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:33:23.456643 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.463729 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:33:23.471167 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474893 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474950 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.516093 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:33:23.523732 1921402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:33:23.527686 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:33:23.568782 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:33:23.609619 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:33:23.650314 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:33:23.696111 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:33:23.737413 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
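Each -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); all six control-plane certs pass here (no renewal is logged). Standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expiring within 24h"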
	I1222 00:33:23.778391 1921402 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:23.778492 1921402 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:33:23.778592 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.806800 1921402 cri.go:96] found id: ""
	I1222 00:33:23.806880 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:33:23.814609 1921402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:33:23.814618 1921402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:33:23.814668 1921402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:33:23.821980 1921402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.822486 1921402 kubeconfig.go:125] found "functional-837733" server: "https://192.168.49.2:8441"
	I1222 00:33:23.823779 1921402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:33:23.831691 1921402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:18:48.647493207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:33:22.349448241 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1222 00:33:23.831708 1921402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:33:23.831719 1921402 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 00:33:23.831773 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.866496 1921402 cri.go:96] found id: ""
	I1222 00:33:23.866558 1921402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:33:23.889134 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:33:23.897048 1921402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 22 00:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 00:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:22 /etc/kubernetes/scheduler.conf
	
	I1222 00:33:23.897106 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:33:23.904740 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:33:23.912683 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.912741 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:33:23.920107 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.927907 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.927961 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.935547 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:33:23.943201 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.943254 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:33:23.950967 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:33:23.959359 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:24.006017 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.352657 1921402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346615684s)
	I1222 00:33:25.352716 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.546706 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.608812 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
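Because existing configuration files were found, the restart path reruns individual kubeadm phases rather than a full kubeadm init. Stripped of the env PATH wrapper, the sequence above is:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml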
	I1222 00:33:25.656813 1921402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:33:25.656915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.157087 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeated at ~500ms intervals, 119 attempts in total, through I1222 00:34:25.157680; no kube-apiserver process ever appeared ...]
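The minute-long probe loop condensed above is equivalent in effect to (a behavioural sketch, not minikube's code):

    # -x exact match, -n newest matching process, -f match against the full command line
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done

It never succeeds here: the static-pod apiserver written by the control-plane phase does not come up, so minikube falls back to gathering diagnostics below.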
	I1222 00:34:25.657000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:25.657101 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:25.682430 1921402 cri.go:96] found id: ""
	I1222 00:34:25.682444 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.682451 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:25.682457 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:25.682521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:25.708391 1921402 cri.go:96] found id: ""
	I1222 00:34:25.708406 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.708414 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:25.708420 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:25.708495 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:25.735246 1921402 cri.go:96] found id: ""
	I1222 00:34:25.735261 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.735268 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:25.735273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:25.735330 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:25.764584 1921402 cri.go:96] found id: ""
	I1222 00:34:25.764599 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.764606 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:25.764611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:25.764672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:25.790407 1921402 cri.go:96] found id: ""
	I1222 00:34:25.790421 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.790428 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:25.790434 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:25.790492 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:25.816256 1921402 cri.go:96] found id: ""
	I1222 00:34:25.816270 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.816278 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:25.816283 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:25.816345 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:25.842813 1921402 cri.go:96] found id: ""
	I1222 00:34:25.842828 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.842835 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:25.842843 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:25.842853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:25.914526 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:25.914546 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
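The backtick expression above is a defensive fallback chain: resolve crictl's full path if it is installed, otherwise try the bare name, and if the whole crictl invocation fails, try docker instead. A cleaner modern spelling of the same idea:

    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a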
	I1222 00:34:25.948008 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:25.948024 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:26.016574 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:26.016596 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:26.032648 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:26.032666 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:26.102628 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:26.094028   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.094916   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.096594   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.097214   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.098390   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
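Every entry in this failure block has the same shape: kubectl on the node dials localhost:8441 and gets connection refused, meaning nothing is listening on the apiserver port at all. A direct probe (a hedged check using the port configured above) separates "not listening" from "listening but unhealthy":

    curl -sk https://localhost:8441/healthz || echo "apiserver not listening"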
	I1222 00:34:28.604431 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:28.618535 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:28.618599 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:28.647904 1921402 cri.go:96] found id: ""
	I1222 00:34:28.647920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.647927 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:28.647933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:28.647987 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:28.674397 1921402 cri.go:96] found id: ""
	I1222 00:34:28.674411 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.674418 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:28.674423 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:28.674481 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:28.704093 1921402 cri.go:96] found id: ""
	I1222 00:34:28.704107 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.704114 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:28.704129 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:28.704187 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:28.729449 1921402 cri.go:96] found id: ""
	I1222 00:34:28.729463 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.729470 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:28.729475 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:28.729535 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:28.754277 1921402 cri.go:96] found id: ""
	I1222 00:34:28.754292 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.754300 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:28.754305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:28.754366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:28.779419 1921402 cri.go:96] found id: ""
	I1222 00:34:28.779445 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.779453 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:28.779459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:28.779526 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:28.804826 1921402 cri.go:96] found id: ""
	I1222 00:34:28.804841 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.804848 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:28.804855 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:28.804866 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:28.871702 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:28.871720 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:28.886327 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:28.886342 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:28.950915 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:28.942587   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.943003   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.944829   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.945306   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.946899   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:28.950935 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:28.950945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:29.021787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:29.021807 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.549965 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:31.559848 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:31.559909 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:31.586473 1921402 cri.go:96] found id: ""
	I1222 00:34:31.586487 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.586494 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:31.586500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:31.586557 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:31.619635 1921402 cri.go:96] found id: ""
	I1222 00:34:31.619649 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.619657 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:31.619662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:31.619718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:31.649990 1921402 cri.go:96] found id: ""
	I1222 00:34:31.650004 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.650011 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:31.650017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:31.650079 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:31.676339 1921402 cri.go:96] found id: ""
	I1222 00:34:31.676354 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.676361 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:31.676366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:31.676424 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:31.704084 1921402 cri.go:96] found id: ""
	I1222 00:34:31.704098 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.704105 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:31.704110 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:31.704170 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:31.728706 1921402 cri.go:96] found id: ""
	I1222 00:34:31.728720 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.728727 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:31.728732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:31.728790 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:31.756276 1921402 cri.go:96] found id: ""
	I1222 00:34:31.756290 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.756297 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:31.756304 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:31.756314 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:31.825066 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:31.825085 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.853826 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:31.853841 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:31.920106 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:31.920125 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:31.934967 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:31.934983 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:31.996347 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
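
[Editor's note] The block above is one full iteration of minikube's wait-for-apiserver loop: it first looks for a running kube-apiserver process (`pgrep -xnf`, i.e. exact full-command-line match, newest process), then asks the CRI runtime via crictl for each expected control-plane container, and, finding none, falls back to gathering kubelet/CRI-O/dmesg/describe-nodes diagnostics. The same iteration repeats below roughly every three seconds. A minimal Go sketch of that polling pattern follows; it is hypothetical, not minikube's actual implementation — the component list and the ~3-second interval are simply read off the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // Control-plane containers the log above probes for, in the same order.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainers asks the CRI runtime for container IDs matching name,
    // mirroring `sudo crictl --timeout=10s ps -a --quiet --name=<name>`.
    func listContainers(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "--timeout=10s",
    		"ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for {
    		found := false
    		for _, c := range components {
    			if ids := listContainers(c); len(ids) > 0 {
    				found = true
    			} else {
    				fmt.Printf("No container was found matching %q\n", c)
    			}
    		}
    		if found {
    			return
    		}
    		// The log shows one full probe cycle roughly every three seconds.
    		time.Sleep(3 * time.Second)
    	}
    }
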
	I1222 00:34:34.497426 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:34.507616 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:34.507676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:34.531742 1921402 cri.go:96] found id: ""
	I1222 00:34:34.531755 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.531763 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:34.531769 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:34.531843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:34.559670 1921402 cri.go:96] found id: ""
	I1222 00:34:34.559683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.559691 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:34.559695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:34.559762 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:34.587516 1921402 cri.go:96] found id: ""
	I1222 00:34:34.587531 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.587538 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:34.587543 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:34.587621 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:34.618669 1921402 cri.go:96] found id: ""
	I1222 00:34:34.618683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.618690 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:34.618695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:34.618754 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:34.655347 1921402 cri.go:96] found id: ""
	I1222 00:34:34.655361 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.655368 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:34.655373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:34.655429 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:34.680612 1921402 cri.go:96] found id: ""
	I1222 00:34:34.680626 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.680643 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:34.680648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:34.680718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:34.706049 1921402 cri.go:96] found id: ""
	I1222 00:34:34.706070 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.706078 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:34.706085 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:34.706095 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:34.774755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:34.774774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:34.789897 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:34.789917 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:34.854675 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:34.854686 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:34.854696 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:34.923449 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:34.923469 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.451135 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:37.461580 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:37.461643 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:37.488448 1921402 cri.go:96] found id: ""
	I1222 00:34:37.488474 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.488482 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:37.488488 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:37.488544 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:37.513499 1921402 cri.go:96] found id: ""
	I1222 00:34:37.513514 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.513526 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:37.513531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:37.513595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:37.539878 1921402 cri.go:96] found id: ""
	I1222 00:34:37.539892 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.539899 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:37.539905 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:37.539979 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:37.568838 1921402 cri.go:96] found id: ""
	I1222 00:34:37.568853 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.568860 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:37.568866 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:37.568934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:37.604110 1921402 cri.go:96] found id: ""
	I1222 00:34:37.604124 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.604131 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:37.604156 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:37.604212 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:37.639214 1921402 cri.go:96] found id: ""
	I1222 00:34:37.639228 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.639235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:37.639241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:37.639301 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:37.674749 1921402 cri.go:96] found id: ""
	I1222 00:34:37.674763 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.674771 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:37.674779 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:37.674790 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.709493 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:37.709513 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:37.781144 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:37.781164 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:37.797445 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:37.797461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:37.864668 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:37.864678 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:37.864688 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
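
[Editor's note] Every `kubectl describe nodes` attempt in this section fails identically: kubectl cannot reach the apiserver on localhost:8441 (this profile's apiserver port), so the TCP dial is refused before any API request is made — hence the repeated `dial tcp [::1]:8441: connect: connection refused`. The failure mode can be reproduced with a plain TCP dial; a minimal sketch under that assumption (the port is taken from the log; nothing is assumed about why the apiserver is down):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the apiserver port kubectl uses in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// With nothing listening, this prints the same
    		// "connect: connection refused" seen in the stderr blocks.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
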
	I1222 00:34:40.444815 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:40.456172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:40.456239 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:40.482752 1921402 cri.go:96] found id: ""
	I1222 00:34:40.482774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.482781 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:40.482786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:40.482846 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:40.508000 1921402 cri.go:96] found id: ""
	I1222 00:34:40.508014 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.508021 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:40.508026 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:40.508087 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:40.533866 1921402 cri.go:96] found id: ""
	I1222 00:34:40.533881 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.533889 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:40.533894 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:40.533955 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:40.559781 1921402 cri.go:96] found id: ""
	I1222 00:34:40.559794 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.559802 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:40.559807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:40.559866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:40.586989 1921402 cri.go:96] found id: ""
	I1222 00:34:40.587003 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.587010 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:40.587016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:40.587074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:40.619369 1921402 cri.go:96] found id: ""
	I1222 00:34:40.619384 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.619400 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:40.619405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:40.619462 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:40.648846 1921402 cri.go:96] found id: ""
	I1222 00:34:40.648870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.648878 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:40.648886 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:40.648896 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:40.720161 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:40.720180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:40.736116 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:40.736141 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:40.808179 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:40.808193 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:40.808204 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.876486 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:40.876505 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.409595 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:43.419504 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:43.419564 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:43.444338 1921402 cri.go:96] found id: ""
	I1222 00:34:43.444360 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.444368 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:43.444373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:43.444430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:43.471938 1921402 cri.go:96] found id: ""
	I1222 00:34:43.471953 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.471960 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:43.471965 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:43.472025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:43.497576 1921402 cri.go:96] found id: ""
	I1222 00:34:43.497590 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.497597 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:43.497611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:43.497667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:43.526748 1921402 cri.go:96] found id: ""
	I1222 00:34:43.526762 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.526769 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:43.526775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:43.526838 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:43.556676 1921402 cri.go:96] found id: ""
	I1222 00:34:43.556691 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.556698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:43.556704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:43.556766 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:43.584425 1921402 cri.go:96] found id: ""
	I1222 00:34:43.584440 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.584447 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:43.584453 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:43.584514 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:43.619760 1921402 cri.go:96] found id: ""
	I1222 00:34:43.619774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.619782 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:43.619789 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:43.619813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:43.693963 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:43.693985 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.722757 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:43.722773 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:43.790316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:43.790336 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:43.804810 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:43.804829 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:43.866141 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:46.366416 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:46.376013 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:46.376073 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:46.402801 1921402 cri.go:96] found id: ""
	I1222 00:34:46.402815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.402822 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:46.402828 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:46.402884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:46.431904 1921402 cri.go:96] found id: ""
	I1222 00:34:46.431920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.431927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:46.431933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:46.431991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:46.456574 1921402 cri.go:96] found id: ""
	I1222 00:34:46.456588 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.456594 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:46.456600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:46.456656 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:46.481542 1921402 cri.go:96] found id: ""
	I1222 00:34:46.481556 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.481563 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:46.481569 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:46.481627 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:46.507668 1921402 cri.go:96] found id: ""
	I1222 00:34:46.507682 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.507689 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:46.507694 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:46.507756 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:46.531662 1921402 cri.go:96] found id: ""
	I1222 00:34:46.531675 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.531683 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:46.531688 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:46.531746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:46.558345 1921402 cri.go:96] found id: ""
	I1222 00:34:46.558359 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.558367 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:46.558375 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:46.558385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:46.630348 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:46.630367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:46.648136 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:46.648153 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:46.720659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:46.720669 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:46.720689 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:46.790237 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:46.790255 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
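
[Editor's note] Each failed iteration ends with the same collection commands — kubelet and CRI-O via journalctl, dmesg filtered to warnings and worse, and a crictl/docker container listing — though their order differs between iterations. A hedged Go sketch of that collection step, assuming the same commands the log runs (iterating a Go map even reproduces the varying order):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The collection commands visible in each iteration above.
    	cmds := map[string][]string{
    		"kubelet":          {"journalctl", "-u", "kubelet", "-n", "400"},
    		"CRI-O":            {"journalctl", "-u", "crio", "-n", "400"},
    		"dmesg":            {"dmesg", "--level", "warn,err,crit,alert,emerg"},
    		"container status": {"crictl", "ps", "-a"},
    	}
    	for name, args := range cmds {
    		// CombinedOutput keeps stderr too, so a failing command (like the
    		// describe-nodes call in the log) still leaves a trace.
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s failed: %v\n", name, err)
    		}
    		fmt.Printf("=== %s ===\n%s\n", name, out)
    	}
    }
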
	I1222 00:34:49.320987 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:49.331027 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:49.331083 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:49.355873 1921402 cri.go:96] found id: ""
	I1222 00:34:49.355887 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.355903 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:49.355909 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:49.355967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:49.381127 1921402 cri.go:96] found id: ""
	I1222 00:34:49.381151 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.381158 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:49.381163 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:49.381231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:49.407961 1921402 cri.go:96] found id: ""
	I1222 00:34:49.407983 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.407990 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:49.407995 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:49.408062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:49.437856 1921402 cri.go:96] found id: ""
	I1222 00:34:49.437870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.437877 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:49.437883 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:49.437940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:49.462901 1921402 cri.go:96] found id: ""
	I1222 00:34:49.462914 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.462921 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:49.462927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:49.462981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:49.487289 1921402 cri.go:96] found id: ""
	I1222 00:34:49.487303 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.487309 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:49.487314 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:49.487372 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:49.513186 1921402 cri.go:96] found id: ""
	I1222 00:34:49.513200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.513208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:49.513215 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:49.513225 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:49.527889 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:49.527904 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:49.591170 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:49.591180 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:49.591189 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:49.670799 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:49.670817 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.698837 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:49.698853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.268261 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:52.278246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:52.278304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:52.303014 1921402 cri.go:96] found id: ""
	I1222 00:34:52.303035 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.303043 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:52.303048 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:52.303114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:52.331907 1921402 cri.go:96] found id: ""
	I1222 00:34:52.331921 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.331928 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:52.331933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:52.331989 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:52.357227 1921402 cri.go:96] found id: ""
	I1222 00:34:52.357240 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.357247 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:52.357253 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:52.357315 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:52.383730 1921402 cri.go:96] found id: ""
	I1222 00:34:52.383745 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.383752 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:52.383758 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:52.383819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:52.409605 1921402 cri.go:96] found id: ""
	I1222 00:34:52.409620 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.409636 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:52.409641 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:52.409701 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:52.438817 1921402 cri.go:96] found id: ""
	I1222 00:34:52.438831 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.438838 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:52.438843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:52.438901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:52.464983 1921402 cri.go:96] found id: ""
	I1222 00:34:52.465007 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.465016 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:52.465027 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:52.465038 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:52.533460 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:52.533470 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:52.533493 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:52.602597 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:52.602617 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:52.636250 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:52.636266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.708981 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:52.709002 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.224727 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:55.235168 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:55.235225 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:55.260911 1921402 cri.go:96] found id: ""
	I1222 00:34:55.260926 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.260933 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:55.260938 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:55.260994 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:55.286615 1921402 cri.go:96] found id: ""
	I1222 00:34:55.286634 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.286651 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:55.286657 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:55.286736 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:55.311470 1921402 cri.go:96] found id: ""
	I1222 00:34:55.311484 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.311492 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:55.311497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:55.311555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:55.336659 1921402 cri.go:96] found id: ""
	I1222 00:34:55.336673 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.336689 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:55.336695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:55.336755 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:55.368789 1921402 cri.go:96] found id: ""
	I1222 00:34:55.368815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.368823 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:55.368829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:55.368895 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:55.394360 1921402 cri.go:96] found id: ""
	I1222 00:34:55.394374 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.394381 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:55.394387 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:55.394448 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:55.419182 1921402 cri.go:96] found id: ""
	I1222 00:34:55.419196 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.419204 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:55.419211 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:55.419224 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:55.486298 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:55.486316 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.501160 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:55.501179 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:55.569075 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:55.569085 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:55.569105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:55.643182 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:55.643207 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
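
The cycle above is minikube's apiserver health loop: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, when every listing comes back empty, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. The same probes can be run by hand on the node; a minimal sketch using only commands that appear in the log, assuming shell access to the node (e.g. minikube ssh):

    # Probe for a running kube-apiserver process, as the pgrep line above does
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # List CRI containers in any state for each control-plane component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl --timeout=10s ps -a --quiet --name="$name"
    done

An empty result for every name, as seen throughout this log, means the kubelet never created (or the runtime never started) any control-plane container.
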
	I1222 00:34:58.182663 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:58.193476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:58.193537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:58.219859 1921402 cri.go:96] found id: ""
	I1222 00:34:58.219873 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.219880 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:58.219886 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:58.219944 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:58.245686 1921402 cri.go:96] found id: ""
	I1222 00:34:58.245700 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.245707 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:58.245712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:58.245769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:58.272173 1921402 cri.go:96] found id: ""
	I1222 00:34:58.272187 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.272194 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:58.272199 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:58.272258 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:58.296505 1921402 cri.go:96] found id: ""
	I1222 00:34:58.296519 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.296526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:58.296532 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:58.296590 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:58.321897 1921402 cri.go:96] found id: ""
	I1222 00:34:58.321911 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.321919 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:58.321924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:58.321981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:58.347185 1921402 cri.go:96] found id: ""
	I1222 00:34:58.347200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.347206 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:58.347212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:58.347272 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:58.371593 1921402 cri.go:96] found id: ""
	I1222 00:34:58.371607 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.371614 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:58.371621 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:58.371631 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:58.439841 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:58.439861 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.471817 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:58.471833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:58.540043 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:58.540062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:58.556739 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:58.556755 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:58.632107 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
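
Every describe-nodes attempt fails identically: nothing answers on localhost:8441, the apiserver port for this profile. From the node this is quick to confirm; a sketch, assuming ss and curl are present in the node image (an assumption, they are not shown in this log):

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep ':8441' || echo 'nothing listening on 8441'

    # Does the apiserver answer at all? (-k because the serving cert is self-signed)
    curl -k --max-time 5 https://localhost:8441/healthz || true
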
	I1222 00:35:01.133446 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:01.145134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:01.145201 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:01.174836 1921402 cri.go:96] found id: ""
	I1222 00:35:01.174853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.174861 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:01.174867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:01.174943 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:01.202691 1921402 cri.go:96] found id: ""
	I1222 00:35:01.202706 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.202715 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:01.202721 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:01.202785 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:01.232555 1921402 cri.go:96] found id: ""
	I1222 00:35:01.232578 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.232586 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:01.232592 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:01.232687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:01.260828 1921402 cri.go:96] found id: ""
	I1222 00:35:01.260853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.260861 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:01.260867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:01.260954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:01.290608 1921402 cri.go:96] found id: ""
	I1222 00:35:01.290623 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.290630 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:01.290635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:01.290721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:01.321704 1921402 cri.go:96] found id: ""
	I1222 00:35:01.321719 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.321726 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:01.321732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:01.321802 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:01.349512 1921402 cri.go:96] found id: ""
	I1222 00:35:01.349526 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.349533 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:01.349540 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:01.349551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:01.385559 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:01.385575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:01.457873 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:01.457900 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:01.473910 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:01.473928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:01.540419 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:01.540431 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:01.540442 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
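
When no control-plane containers exist at all, the kubelet and CRI-O journals that minikube tails here are usually where the root cause surfaces (static pods failing to start, image pulls failing, or the runtime rejecting the sandbox). The same queries, plus a time- and severity-scoped variant:

    # Last 400 lines, exactly as minikube collects them
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # Or scope by time and severity instead of a fixed line count
    sudo journalctl -u kubelet --since '10 min ago' -p warning
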
	I1222 00:35:04.114962 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:04.126202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:04.126262 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:04.157781 1921402 cri.go:96] found id: ""
	I1222 00:35:04.157795 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.157802 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:04.157807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:04.157868 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:04.185090 1921402 cri.go:96] found id: ""
	I1222 00:35:04.185104 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.185114 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:04.185119 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:04.185189 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:04.210432 1921402 cri.go:96] found id: ""
	I1222 00:35:04.210446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.210453 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:04.210458 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:04.210518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:04.235769 1921402 cri.go:96] found id: ""
	I1222 00:35:04.235783 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.235791 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:04.235797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:04.235853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:04.260844 1921402 cri.go:96] found id: ""
	I1222 00:35:04.260858 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.260867 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:04.260872 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:04.260940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:04.286088 1921402 cri.go:96] found id: ""
	I1222 00:35:04.286103 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.286110 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:04.286115 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:04.286171 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:04.311902 1921402 cri.go:96] found id: ""
	I1222 00:35:04.311916 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.311923 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:04.311930 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:04.311945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:04.383800 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:04.383820 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:04.398944 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:04.398960 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:04.463648 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:04.463658 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:04.463668 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:04.533070 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:04.533091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:07.062949 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:07.072997 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:07.073058 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:07.104724 1921402 cri.go:96] found id: ""
	I1222 00:35:07.104739 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.104746 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:07.104751 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:07.104808 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:07.142683 1921402 cri.go:96] found id: ""
	I1222 00:35:07.142697 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.142704 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:07.142712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:07.142772 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:07.168569 1921402 cri.go:96] found id: ""
	I1222 00:35:07.168583 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.168590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:07.168596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:07.168654 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:07.196279 1921402 cri.go:96] found id: ""
	I1222 00:35:07.196295 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.196302 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:07.196307 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:07.196366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:07.220862 1921402 cri.go:96] found id: ""
	I1222 00:35:07.220876 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.220883 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:07.220888 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:07.220967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:07.247783 1921402 cri.go:96] found id: ""
	I1222 00:35:07.247797 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.247804 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:07.247810 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:07.247866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:07.272632 1921402 cri.go:96] found id: ""
	I1222 00:35:07.272658 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.272668 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:07.272675 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:07.272685 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:07.342006 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:07.342026 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:07.357444 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:07.357461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:07.428354 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:07.428365 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:07.428376 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:07.499031 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:07.499051 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
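
The container-status step is runtime-agnostic: the backtick expression in the Run line prefers crictl and falls back to docker ps when crictl is missing or errors. Spelled out (a sketch covering only the missing-binary case):

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a        # CRI runtimes (CRI-O, containerd)
    else
      sudo docker ps -a        # Docker runtime fallback
    fi
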
	I1222 00:35:10.035662 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:10.047188 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:10.047256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:10.074669 1921402 cri.go:96] found id: ""
	I1222 00:35:10.074694 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.074702 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:10.074708 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:10.074775 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:10.111688 1921402 cri.go:96] found id: ""
	I1222 00:35:10.111702 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.111710 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:10.111716 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:10.111816 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:10.141600 1921402 cri.go:96] found id: ""
	I1222 00:35:10.141614 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.141622 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:10.141635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:10.141697 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:10.170620 1921402 cri.go:96] found id: ""
	I1222 00:35:10.170635 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.170642 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:10.170648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:10.170709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:10.196076 1921402 cri.go:96] found id: ""
	I1222 00:35:10.196090 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.196098 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:10.196111 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:10.196173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:10.222435 1921402 cri.go:96] found id: ""
	I1222 00:35:10.222449 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.222456 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:10.222461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:10.222518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:10.248319 1921402 cri.go:96] found id: ""
	I1222 00:35:10.248333 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.248350 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:10.248356 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:10.248367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:10.263245 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:10.263267 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:10.325942 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:10.325966 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:10.325977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:10.395942 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:10.395961 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.423671 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:10.423686 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:12.990380 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:13.000511 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:13.000585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:13.028285 1921402 cri.go:96] found id: ""
	I1222 00:35:13.028299 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.028306 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:13.028311 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:13.028380 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:13.055795 1921402 cri.go:96] found id: ""
	I1222 00:35:13.055809 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.055816 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:13.055821 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:13.055878 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:13.083562 1921402 cri.go:96] found id: ""
	I1222 00:35:13.083576 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.083584 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:13.083589 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:13.083647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:13.123340 1921402 cri.go:96] found id: ""
	I1222 00:35:13.123353 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.123360 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:13.123366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:13.123422 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:13.160293 1921402 cri.go:96] found id: ""
	I1222 00:35:13.160306 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.160314 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:13.160321 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:13.160393 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:13.186389 1921402 cri.go:96] found id: ""
	I1222 00:35:13.186403 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.186410 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:13.186415 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:13.186470 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:13.214187 1921402 cri.go:96] found id: ""
	I1222 00:35:13.214202 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.214208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:13.214216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:13.214228 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:13.241162 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:13.241178 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:13.306852 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:13.306871 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:13.322142 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:13.322158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:13.389384 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:13.389394 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:13.389419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
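
Note that describe-nodes runs the kubectl binary minikube staged under /var/lib/minikube/binaries/v1.35.0-rc.1/ against the node-local kubeconfig, so the refusal reflects the in-node view rather than the host's. The equivalent manual call, taken directly from the Run line above:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      describe nodes
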
	I1222 00:35:15.963833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:15.973870 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:15.973930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:15.999121 1921402 cri.go:96] found id: ""
	I1222 00:35:15.999135 1921402 logs.go:282] 0 containers: []
	W1222 00:35:15.999142 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:15.999147 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:15.999203 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:16.027303 1921402 cri.go:96] found id: ""
	I1222 00:35:16.027317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.027324 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:16.027330 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:16.027390 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:16.056726 1921402 cri.go:96] found id: ""
	I1222 00:35:16.056740 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.056747 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:16.056753 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:16.056809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:16.082920 1921402 cri.go:96] found id: ""
	I1222 00:35:16.082934 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.082941 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:16.082946 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:16.083011 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:16.120145 1921402 cri.go:96] found id: ""
	I1222 00:35:16.120160 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.120167 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:16.120172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:16.120228 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:16.149800 1921402 cri.go:96] found id: ""
	I1222 00:35:16.149814 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.149821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:16.149827 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:16.149883 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:16.175126 1921402 cri.go:96] found id: ""
	I1222 00:35:16.175140 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.175147 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:16.175155 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:16.175168 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:16.241780 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:16.241800 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:16.256196 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:16.256212 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:16.317803 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:16.317813 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:16.317824 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:16.386643 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:16.386662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
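
After this many identical cycles the picture is stable: logs are collected on every pass, but no control-plane container ever appears. Checking the kubelet unit and the static-pod manifests directly would narrow this down; a sketch, assuming the standard kubeadm manifest path (not shown in this excerpt):

    systemctl status kubelet --no-pager
    ls -l /etc/kubernetes/manifests/   # static-pod manifests the kubelet should launch
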
	I1222 00:35:18.918855 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:18.928947 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:18.929009 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:18.954131 1921402 cri.go:96] found id: ""
	I1222 00:35:18.954151 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.954158 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:18.954169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:18.954229 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:18.978729 1921402 cri.go:96] found id: ""
	I1222 00:35:18.978780 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.978788 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:18.978793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:18.978859 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:19.006460 1921402 cri.go:96] found id: ""
	I1222 00:35:19.006474 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.006482 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:19.006487 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:19.006547 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:19.035802 1921402 cri.go:96] found id: ""
	I1222 00:35:19.035818 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.035825 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:19.035830 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:19.035889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:19.061703 1921402 cri.go:96] found id: ""
	I1222 00:35:19.061716 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.061723 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:19.061729 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:19.061789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:19.094243 1921402 cri.go:96] found id: ""
	I1222 00:35:19.094256 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.094264 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:19.094269 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:19.094324 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:19.135211 1921402 cri.go:96] found id: ""
	I1222 00:35:19.135238 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.135246 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:19.135253 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:19.135263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:19.203366 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:19.203376 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:19.203385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:19.275567 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:19.275585 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:19.303090 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:19.303105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:19.372108 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:19.372127 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
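Each block like the one above is a single pass of minikube's apiserver wait loop: it runs sudo pgrep for a kube-apiserver process, then asks the CRI runtime for each expected control-plane container by name via crictl, and only falls back to gathering diagnostics when every probe comes back empty. A minimal sketch of that per-component probe, assuming plain local exec rather than minikube's ssh_runner (the function names here are illustrative, not minikube's API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probeContainers mirrors the crictl probes in the log: for each expected
    // control-plane component it lists matching containers (running or exited)
    // and records whether any IDs came back.
    func probeContainers() map[string][]string {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	found := make(map[string][]string)
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
    			"ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			continue // treat a failed probe the same as "no containers"
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		found[name] = ids
    	}
    	return found
    }

    func main() {
    	fmt.Println(probeContainers())
    }

In this run every probe returns an empty ID list, which is why each pass ends in the diagnostic-gathering branch.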
	I1222 00:35:21.888736 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:21.899401 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:21.899461 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:21.924030 1921402 cri.go:96] found id: ""
	I1222 00:35:21.924045 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.924052 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:21.924057 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:21.924114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:21.950593 1921402 cri.go:96] found id: ""
	I1222 00:35:21.950606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.950613 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:21.950618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:21.950676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:21.978354 1921402 cri.go:96] found id: ""
	I1222 00:35:21.978368 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.978375 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:21.978380 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:21.978437 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:22.005709 1921402 cri.go:96] found id: ""
	I1222 00:35:22.005724 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.005734 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:22.005741 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:22.005807 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:22.033771 1921402 cri.go:96] found id: ""
	I1222 00:35:22.033785 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.033793 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:22.033799 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:22.033862 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:22.063584 1921402 cri.go:96] found id: ""
	I1222 00:35:22.063598 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.063606 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:22.063611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:22.063672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:22.090629 1921402 cri.go:96] found id: ""
	I1222 00:35:22.090643 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.090651 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:22.090658 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:22.090669 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:22.172686 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:22.172713 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:22.192440 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:22.192459 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:22.261299 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:22.261309 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:22.261339 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:22.337282 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:22.337301 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
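When no control-plane containers are found, the loop gathers the same five log sources on every pass: kubelet and CRI-O via journalctl, kernel warnings via dmesg, kubectl describe nodes, and the container list. A hedged sketch of that gather step — the shell strings are copied verbatim from the log above, while the surrounding Go is illustrative only:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs the same diagnostic commands the wait loop shells out to.
    func gatherLogs() {
    	sources := []struct{ name, cmd string }{
    		{"kubelet", `sudo journalctl -u kubelet -n 400`},
    		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
    		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
    		{"CRI-O", `sudo journalctl -u crio -n 400`},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", s.name, err, out)
    	}
    }

    func main() { gatherLogs() }

Of the five, only "describe nodes" keeps failing in this run, because it is the one source that needs a live apiserver.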
	I1222 00:35:24.868746 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:24.878846 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:24.878904 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:24.903813 1921402 cri.go:96] found id: ""
	I1222 00:35:24.903828 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.903835 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:24.903841 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:24.903901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:24.930142 1921402 cri.go:96] found id: ""
	I1222 00:35:24.930157 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.930164 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:24.930169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:24.930224 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:24.954989 1921402 cri.go:96] found id: ""
	I1222 00:35:24.955009 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.955016 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:24.955022 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:24.955082 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:24.984399 1921402 cri.go:96] found id: ""
	I1222 00:35:24.984413 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.984421 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:24.984426 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:24.984489 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:25.016621 1921402 cri.go:96] found id: ""
	I1222 00:35:25.016637 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.016644 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:25.016650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:25.016714 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:25.044807 1921402 cri.go:96] found id: ""
	I1222 00:35:25.044821 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.044830 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:25.044835 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:25.044926 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:25.071499 1921402 cri.go:96] found id: ""
	I1222 00:35:25.071514 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.071521 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:25.071530 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:25.071541 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:25.149156 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:25.149180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:25.168790 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:25.168813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:25.235601 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:25.235611 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:25.235622 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:25.306834 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:25.306854 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
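Every describe-nodes attempt fails the same way: kubectl dials the apiserver at localhost:8441 (the apiserver port this functional-test profile uses) and gets connection refused. That error means nothing is bound to the port at all — it is not a slow or timed-out request. A small sketch, not part of the test itself, that makes the distinction explicit from the node:

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    // checkAPIServer dials the apiserver port and distinguishes "nothing is
    // listening" (connection refused, as in the log) from a slow or filtered
    // endpoint (timeout or other dial error).
    func checkAPIServer(addr string) {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	switch {
    	case err == nil:
    		conn.Close()
    		fmt.Println("something is listening on", addr)
    	case errors.Is(err, syscall.ECONNREFUSED):
    		fmt.Println("connection refused: no process bound to", addr)
    	default:
    		fmt.Println("dial failed:", err)
    	}
    }

    func main() { checkAPIServer("localhost:8441") }

Taken together with the empty crictl probes, the refused dials say the kube-apiserver container never started, so kubelet has nothing to expose on 8441.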
	I1222 00:35:27.843405 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:27.854516 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:27.854584 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:27.881946 1921402 cri.go:96] found id: ""
	I1222 00:35:27.881960 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.881967 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:27.881972 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:27.882034 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:27.911889 1921402 cri.go:96] found id: ""
	I1222 00:35:27.911903 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.911922 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:27.911927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:27.911984 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:27.938642 1921402 cri.go:96] found id: ""
	I1222 00:35:27.938656 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.938687 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:27.938693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:27.938770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:27.965296 1921402 cri.go:96] found id: ""
	I1222 00:35:27.965317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.965336 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:27.965342 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:27.965410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:27.990233 1921402 cri.go:96] found id: ""
	I1222 00:35:27.990247 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.990254 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:27.990260 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:27.990316 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:28.017956 1921402 cri.go:96] found id: ""
	I1222 00:35:28.017970 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.017986 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:28.017993 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:28.018051 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:28.046152 1921402 cri.go:96] found id: ""
	I1222 00:35:28.046174 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.046182 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:28.046189 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:28.046200 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:28.114636 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:28.114656 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:28.131317 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:28.131333 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:28.201574 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:28.201584 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:28.201594 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:28.273807 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:28.273827 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:30.802833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:30.812760 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:30.812828 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:30.837592 1921402 cri.go:96] found id: ""
	I1222 00:35:30.837606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.837613 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:30.837618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:30.837678 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:30.866275 1921402 cri.go:96] found id: ""
	I1222 00:35:30.866289 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.866296 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:30.866301 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:30.866360 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:30.891264 1921402 cri.go:96] found id: ""
	I1222 00:35:30.891279 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.891286 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:30.891292 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:30.891350 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:30.915256 1921402 cri.go:96] found id: ""
	I1222 00:35:30.915270 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.915277 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:30.915288 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:30.915344 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:30.939875 1921402 cri.go:96] found id: ""
	I1222 00:35:30.939888 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.939895 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:30.939901 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:30.939959 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:30.964213 1921402 cri.go:96] found id: ""
	I1222 00:35:30.964227 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.964235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:30.964240 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:30.964298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:30.989063 1921402 cri.go:96] found id: ""
	I1222 00:35:30.989076 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.989084 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:30.989091 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:30.989104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:31.055959 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:31.055969 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:31.055980 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:31.130704 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:31.130724 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:31.175086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:31.175102 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:31.241755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:31.241774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
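The pgrep probes land roughly every three seconds (00:35:19, :21, :24, :27, :30, ...): the loop sleeps a fixed interval between full probe-and-gather passes rather than backing off. A stdlib-only sketch of that cadence, with hypothetical probe and gather functions standing in for the crictl checks and log collection shown above:

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForAPIServer polls until probe succeeds or the deadline passes,
    // sleeping a fixed interval between passes, matching the ~3s cadence of
    // the pgrep lines in the log.
    func waitForAPIServer(probe func() bool, gather func(), interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if probe() {
    			return nil
    		}
    		gather() // each failed pass collects kubelet/dmesg/CRI-O/container logs
    		if time.Now().After(deadline) {
    			return fmt.Errorf("apiserver did not come up within %s", timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitForAPIServer(
    		func() bool { return false }, // stub probe: apiserver never appears
    		func() {},                    // stub gather
    		3*time.Second, 10*time.Second,
    	)
    	fmt.Println(err)
    }

In the log the probe never succeeds, so the passes repeat unchanged until the surrounding test's timeout fires.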
	I1222 00:35:33.758213 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:33.768479 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:33.768543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:33.800290 1921402 cri.go:96] found id: ""
	I1222 00:35:33.800304 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.800311 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:33.800317 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:33.800374 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:33.828235 1921402 cri.go:96] found id: ""
	I1222 00:35:33.828249 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.828256 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:33.828261 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:33.828319 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:33.853431 1921402 cri.go:96] found id: ""
	I1222 00:35:33.853446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.853454 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:33.853459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:33.853517 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:33.878506 1921402 cri.go:96] found id: ""
	I1222 00:35:33.878520 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.878526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:33.878531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:33.878589 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:33.903278 1921402 cri.go:96] found id: ""
	I1222 00:35:33.903292 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.903299 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:33.903305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:33.903361 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:33.931624 1921402 cri.go:96] found id: ""
	I1222 00:35:33.931638 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.931645 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:33.931650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:33.931709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:33.960994 1921402 cri.go:96] found id: ""
	I1222 00:35:33.961013 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.961029 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:33.961042 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:33.961053 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:34.028536 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:34.028559 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:34.043699 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:34.043717 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:34.113813 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:34.113822 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:34.113832 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:34.196400 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:34.196421 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:36.727299 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:36.737834 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:36.737892 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:36.762640 1921402 cri.go:96] found id: ""
	I1222 00:35:36.762654 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.762661 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:36.762666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:36.762722 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:36.789016 1921402 cri.go:96] found id: ""
	I1222 00:35:36.789030 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.789036 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:36.789041 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:36.789097 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:36.814574 1921402 cri.go:96] found id: ""
	I1222 00:35:36.814588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.814595 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:36.814600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:36.814661 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:36.839346 1921402 cri.go:96] found id: ""
	I1222 00:35:36.839361 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.839368 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:36.839374 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:36.839430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:36.868832 1921402 cri.go:96] found id: ""
	I1222 00:35:36.868846 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.868853 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:36.868858 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:36.868930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:36.894581 1921402 cri.go:96] found id: ""
	I1222 00:35:36.894595 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.894602 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:36.894607 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:36.894664 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:36.924019 1921402 cri.go:96] found id: ""
	I1222 00:35:36.924033 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.924040 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:36.924047 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:36.924059 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:36.990902 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:36.990921 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:37.008849 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:37.008867 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:37.083336 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:37.083345 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:37.083356 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:37.173618 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:37.173638 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
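One detail worth noting in the container-status command: it layers two shell fallbacks. `which crictl || echo crictl` keeps the command word non-empty even when crictl is not on PATH, and the trailing `|| sudo docker ps -a` retries with Docker if the CRI listing fails outright. The same fallback pattern expressed in Go, purely as an illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus reproduces the fallback from the log: try crictl first,
    // and fall back to docker if crictl is missing or errors out.
    func containerStatus() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Printf("err=%v\n%s", err, out)
    }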
	I1222 00:35:39.706094 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:39.716405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:39.716478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:39.746846 1921402 cri.go:96] found id: ""
	I1222 00:35:39.746861 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.746868 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:39.746873 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:39.746933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:39.775788 1921402 cri.go:96] found id: ""
	I1222 00:35:39.775802 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.775810 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:39.775815 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:39.775872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:39.800503 1921402 cri.go:96] found id: ""
	I1222 00:35:39.800523 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.800531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:39.800536 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:39.800595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:39.829015 1921402 cri.go:96] found id: ""
	I1222 00:35:39.829028 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.829035 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:39.829040 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:39.829096 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:39.856667 1921402 cri.go:96] found id: ""
	I1222 00:35:39.856680 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.856688 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:39.856693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:39.856752 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:39.881236 1921402 cri.go:96] found id: ""
	I1222 00:35:39.881259 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.881266 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:39.881272 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:39.881365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:39.907774 1921402 cri.go:96] found id: ""
	I1222 00:35:39.907788 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.907795 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:39.907803 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:39.907814 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:39.973961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:39.973979 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:39.988619 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:39.988633 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:40.060659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:40.060671 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:40.060682 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:40.133570 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:40.133590 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:42.668577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:42.678942 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:42.678998 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:42.708217 1921402 cri.go:96] found id: ""
	I1222 00:35:42.708231 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.708237 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:42.708243 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:42.708298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:42.733470 1921402 cri.go:96] found id: ""
	I1222 00:35:42.733484 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.733491 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:42.733497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:42.733555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:42.758025 1921402 cri.go:96] found id: ""
	I1222 00:35:42.758039 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.758050 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:42.758056 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:42.758114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:42.783070 1921402 cri.go:96] found id: ""
	I1222 00:35:42.783084 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.783092 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:42.783097 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:42.783156 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:42.813432 1921402 cri.go:96] found id: ""
	I1222 00:35:42.813447 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.813454 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:42.813459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:42.813522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:42.841107 1921402 cri.go:96] found id: ""
	I1222 00:35:42.841121 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.841137 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:42.841143 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:42.841208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:42.866521 1921402 cri.go:96] found id: ""
	I1222 00:35:42.866535 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.866543 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:42.866551 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:42.866562 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:42.933074 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:42.933092 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:42.950000 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:42.950025 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:43.021528 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:43.021539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:43.021551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:43.096285 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:43.096308 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:45.625156 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:45.635680 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:45.635759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:45.665749 1921402 cri.go:96] found id: ""
	I1222 00:35:45.665763 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.665770 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:45.665775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:45.665833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:45.692343 1921402 cri.go:96] found id: ""
	I1222 00:35:45.692357 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.692364 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:45.692370 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:45.692432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:45.721093 1921402 cri.go:96] found id: ""
	I1222 00:35:45.721108 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.721115 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:45.721120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:45.721180 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:45.747092 1921402 cri.go:96] found id: ""
	I1222 00:35:45.747107 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.747114 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:45.747120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:45.747177 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:45.776676 1921402 cri.go:96] found id: ""
	I1222 00:35:45.776691 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.776698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:45.776704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:45.776760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:45.802786 1921402 cri.go:96] found id: ""
	I1222 00:35:45.802800 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.802808 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:45.802814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:45.802873 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:45.828248 1921402 cri.go:96] found id: ""
	I1222 00:35:45.828269 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.828279 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:45.828287 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:45.828299 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:45.844960 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:45.844976 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:45.915742 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:45.915756 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:45.915766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:45.994270 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:45.994295 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:46.025259 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:46.025275 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:48.596430 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:48.606663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:48.606725 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:48.635341 1921402 cri.go:96] found id: ""
	I1222 00:35:48.635356 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.635363 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:48.635369 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:48.635430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:48.661161 1921402 cri.go:96] found id: ""
	I1222 00:35:48.661184 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.661191 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:48.661196 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:48.661259 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:48.690450 1921402 cri.go:96] found id: ""
	I1222 00:35:48.690464 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.690470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:48.690476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:48.690537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:48.716347 1921402 cri.go:96] found id: ""
	I1222 00:35:48.716363 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.716388 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:48.716409 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:48.716477 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:48.746573 1921402 cri.go:96] found id: ""
	I1222 00:35:48.746588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.746595 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:48.746600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:48.746659 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:48.772755 1921402 cri.go:96] found id: ""
	I1222 00:35:48.772770 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.772777 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:48.772783 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:48.772843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:48.799586 1921402 cri.go:96] found id: ""
	I1222 00:35:48.799600 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.799607 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:48.799615 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:48.799629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:48.860067 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:48.852213   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.852883   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.853939   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.854510   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.856073   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:48.860076 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:48.860087 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:48.930351 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:48.930371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:48.960079 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:48.960099 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:49.028235 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:49.028254 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.545623 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:51.556102 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:51.556160 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:51.584081 1921402 cri.go:96] found id: ""
	I1222 00:35:51.584095 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.584102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:51.584108 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:51.584166 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:51.608602 1921402 cri.go:96] found id: ""
	I1222 00:35:51.608616 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.608623 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:51.608628 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:51.608684 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:51.638215 1921402 cri.go:96] found id: ""
	I1222 00:35:51.638229 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.638236 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:51.638242 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:51.638300 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:51.673979 1921402 cri.go:96] found id: ""
	I1222 00:35:51.673993 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.674000 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:51.674005 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:51.674062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:51.703793 1921402 cri.go:96] found id: ""
	I1222 00:35:51.703807 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.703814 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:51.703820 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:51.703879 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:51.731106 1921402 cri.go:96] found id: ""
	I1222 00:35:51.731120 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.731128 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:51.731133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:51.731191 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:51.758738 1921402 cri.go:96] found id: ""
	I1222 00:35:51.758752 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.758759 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:51.758766 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:51.758776 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:51.788397 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:51.788413 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:51.855218 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:51.855236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.870464 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:51.870480 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:51.935854 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:51.927185   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.927877   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.929600   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.930120   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.931805   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:51.935864 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:51.935874 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:54.506346 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:54.516494 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:54.516555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:54.542861 1921402 cri.go:96] found id: ""
	I1222 00:35:54.542877 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.542884 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:54.542889 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:54.542947 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:54.567393 1921402 cri.go:96] found id: ""
	I1222 00:35:54.567409 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.567417 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:54.567422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:54.567480 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:54.596220 1921402 cri.go:96] found id: ""
	I1222 00:35:54.596234 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.596241 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:54.596246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:54.596307 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:54.621195 1921402 cri.go:96] found id: ""
	I1222 00:35:54.621210 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.621217 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:54.621223 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:54.621286 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:54.647655 1921402 cri.go:96] found id: ""
	I1222 00:35:54.647669 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.647676 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:54.647681 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:54.647746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:54.673352 1921402 cri.go:96] found id: ""
	I1222 00:35:54.673365 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.673372 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:54.673378 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:54.673434 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:54.702345 1921402 cri.go:96] found id: ""
	I1222 00:35:54.702359 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.702366 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:54.702373 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:54.702385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:54.729725 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:54.729740 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:54.795556 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:54.795573 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:54.810399 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:54.810415 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:54.873205 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:54.864296   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.865005   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.866876   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.867451   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.869170   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:54.873215 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:54.873236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.442564 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:57.452367 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:57.452430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:57.479752 1921402 cri.go:96] found id: ""
	I1222 00:35:57.479765 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.479773 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:57.479780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:57.479843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:57.505262 1921402 cri.go:96] found id: ""
	I1222 00:35:57.505277 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.505286 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:57.505291 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:57.505367 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:57.530750 1921402 cri.go:96] found id: ""
	I1222 00:35:57.530764 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.530771 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:57.530777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:57.530833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:57.557273 1921402 cri.go:96] found id: ""
	I1222 00:35:57.557286 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.557293 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:57.557298 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:57.557375 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:57.589790 1921402 cri.go:96] found id: ""
	I1222 00:35:57.589803 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.589810 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:57.589816 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:57.589872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:57.615560 1921402 cri.go:96] found id: ""
	I1222 00:35:57.615573 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.615580 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:57.615585 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:57.615647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:57.643950 1921402 cri.go:96] found id: ""
	I1222 00:35:57.643964 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.643971 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:57.643979 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:57.643989 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:57.710337 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:57.710357 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:57.725273 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:57.725289 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:57.792176 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:57.782446   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.783810   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.784701   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.785725   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.786474   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:57.792186 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:57.792202 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.869558 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:57.869578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:00.405062 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:00.432605 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:00.432680 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:00.470863 1921402 cri.go:96] found id: ""
	I1222 00:36:00.470879 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.470886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:00.470892 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:00.470958 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:00.501706 1921402 cri.go:96] found id: ""
	I1222 00:36:00.501723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.501736 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:00.501743 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:00.501825 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:00.531718 1921402 cri.go:96] found id: ""
	I1222 00:36:00.531733 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.531741 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:00.531746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:00.531814 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:00.560495 1921402 cri.go:96] found id: ""
	I1222 00:36:00.560511 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.560518 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:00.560525 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:00.560585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:00.588202 1921402 cri.go:96] found id: ""
	I1222 00:36:00.588217 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.588225 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:00.588230 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:00.588293 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:00.616313 1921402 cri.go:96] found id: ""
	I1222 00:36:00.616328 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.616335 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:00.616341 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:00.616402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:00.647784 1921402 cri.go:96] found id: ""
	I1222 00:36:00.647799 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.647806 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:00.647814 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:00.647823 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:00.715096 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:00.715115 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:00.730829 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:00.730845 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:00.797617 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:00.797630 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:00.797642 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:00.866765 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:00.866784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.396597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:03.407146 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:03.407208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:03.433039 1921402 cri.go:96] found id: ""
	I1222 00:36:03.433053 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.433061 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:03.433066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:03.433141 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:03.458706 1921402 cri.go:96] found id: ""
	I1222 00:36:03.458720 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.458727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:03.458732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:03.458789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:03.484510 1921402 cri.go:96] found id: ""
	I1222 00:36:03.484524 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.484531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:03.484537 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:03.484607 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:03.510384 1921402 cri.go:96] found id: ""
	I1222 00:36:03.510399 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.510407 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:03.510413 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:03.510475 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:03.543748 1921402 cri.go:96] found id: ""
	I1222 00:36:03.543763 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.543770 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:03.543775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:03.543833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:03.571136 1921402 cri.go:96] found id: ""
	I1222 00:36:03.571150 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.571157 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:03.571162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:03.571219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:03.600085 1921402 cri.go:96] found id: ""
	I1222 00:36:03.600100 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.600107 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:03.600113 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:03.600124 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:03.664241 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:03.664252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:03.664264 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:03.738233 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:03.738260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.773144 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:03.773160 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:03.841880 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:03.841901 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.357571 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:06.368813 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:06.368874 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:06.400278 1921402 cri.go:96] found id: ""
	I1222 00:36:06.400293 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.400301 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:06.400312 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:06.400373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:06.430787 1921402 cri.go:96] found id: ""
	I1222 00:36:06.430801 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.430809 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:06.430814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:06.430872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:06.455395 1921402 cri.go:96] found id: ""
	I1222 00:36:06.455409 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.455416 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:06.455421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:06.455478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:06.485881 1921402 cri.go:96] found id: ""
	I1222 00:36:06.485895 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.485902 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:06.485908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:06.485981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:06.512824 1921402 cri.go:96] found id: ""
	I1222 00:36:06.512838 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.512845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:06.512850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:06.512918 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:06.540462 1921402 cri.go:96] found id: ""
	I1222 00:36:06.540477 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.540494 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:06.540500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:06.540572 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:06.566709 1921402 cri.go:96] found id: ""
	I1222 00:36:06.566723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.566731 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:06.566738 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:06.566750 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:06.634309 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:06.634331 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.649342 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:06.649359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:06.719108 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:06.709644   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.711082   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.712027   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.713691   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.714019   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:06.719119 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:06.719129 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:06.791062 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:06.791080 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.321217 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:09.331643 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:09.331702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:09.370112 1921402 cri.go:96] found id: ""
	I1222 00:36:09.370139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.370146 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:09.370152 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:09.370216 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:09.400020 1921402 cri.go:96] found id: ""
	I1222 00:36:09.400041 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.400049 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:09.400055 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:09.400126 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:09.436087 1921402 cri.go:96] found id: ""
	I1222 00:36:09.436114 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.436121 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:09.436126 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:09.436194 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:09.463697 1921402 cri.go:96] found id: ""
	I1222 00:36:09.463711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.463718 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:09.463724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:09.463780 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:09.491471 1921402 cri.go:96] found id: ""
	I1222 00:36:09.491485 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.491492 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:09.491498 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:09.491555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:09.516866 1921402 cri.go:96] found id: ""
	I1222 00:36:09.516880 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.516903 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:09.516908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:09.516967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:09.545747 1921402 cri.go:96] found id: ""
	I1222 00:36:09.545761 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.545768 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:09.545777 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:09.545788 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:09.607682 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:09.599126   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.599834   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.601473   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.602038   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.603633   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:09.607692 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:09.607702 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:09.682101 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:09.682120 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.713866 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:09.713881 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:09.781658 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:09.781677 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
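The timestamps show the wait loop re-probing roughly every three seconds (00:36:06, 00:36:09, 00:36:12, ...): each pass runs `pgrep -xnf kube-apiserver.*minikube.*`, lists CRI containers for each control-plane component, and re-gathers the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A sketch of the equivalent shell wait, under the same assumption that the node is reachable via `minikube ssh` (the loop is illustrative, not minikube's own code):

    # poll until an apiserver process matching the log's pattern appears
    until minikube ssh -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"' >/dev/null 2>&1; do
      sleep 3   # the log's probes are roughly 3 s apart
    done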
	I1222 00:36:12.297278 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:12.307636 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:12.307707 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:12.337251 1921402 cri.go:96] found id: ""
	I1222 00:36:12.337266 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.337273 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:12.337278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:12.337373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:12.371706 1921402 cri.go:96] found id: ""
	I1222 00:36:12.371719 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.371727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:12.371732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:12.371789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:12.411047 1921402 cri.go:96] found id: ""
	I1222 00:36:12.411060 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.411067 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:12.411072 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:12.411132 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:12.435536 1921402 cri.go:96] found id: ""
	I1222 00:36:12.435550 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.435557 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:12.435562 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:12.435623 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:12.461780 1921402 cri.go:96] found id: ""
	I1222 00:36:12.461794 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.461802 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:12.461808 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:12.461872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:12.487870 1921402 cri.go:96] found id: ""
	I1222 00:36:12.487884 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.487891 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:12.487897 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:12.487954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:12.513933 1921402 cri.go:96] found id: ""
	I1222 00:36:12.513947 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.513954 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:12.513961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:12.513972 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.528919 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:12.528934 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:12.596732 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:12.587250   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.587883   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.589568   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.590112   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.592244   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:12.596742 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:12.596753 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:12.669434 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:12.669452 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:12.702088 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:12.702104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:15.269222 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:15.279437 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:15.279497 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:15.308005 1921402 cri.go:96] found id: ""
	I1222 00:36:15.308019 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.308026 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:15.308034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:15.308104 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:15.335800 1921402 cri.go:96] found id: ""
	I1222 00:36:15.335814 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.335822 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:15.335829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:15.335889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:15.364041 1921402 cri.go:96] found id: ""
	I1222 00:36:15.364055 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.364064 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:15.364070 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:15.364127 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:15.396634 1921402 cri.go:96] found id: ""
	I1222 00:36:15.396649 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.396657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:15.396663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:15.396721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:15.427551 1921402 cri.go:96] found id: ""
	I1222 00:36:15.427565 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.427572 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:15.427578 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:15.427640 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:15.462972 1921402 cri.go:96] found id: ""
	I1222 00:36:15.462987 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.462994 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:15.463000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:15.463057 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:15.488170 1921402 cri.go:96] found id: ""
	I1222 00:36:15.488184 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.488201 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:15.488208 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:15.488219 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:15.503474 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:15.503499 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:15.576848 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:15.568696   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.569154   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.570855   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.571328   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.572897   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:15.576859 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:15.576870 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:15.647239 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:15.647258 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:15.677589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:15.677604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.247711 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:18.259251 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:18.259314 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:18.285041 1921402 cri.go:96] found id: ""
	I1222 00:36:18.285056 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.285063 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:18.285068 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:18.285129 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:18.312429 1921402 cri.go:96] found id: ""
	I1222 00:36:18.312442 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.312449 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:18.312455 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:18.312509 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:18.342758 1921402 cri.go:96] found id: ""
	I1222 00:36:18.342772 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.342779 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:18.342786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:18.342844 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:18.376484 1921402 cri.go:96] found id: ""
	I1222 00:36:18.376497 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.376505 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:18.376510 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:18.376567 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:18.411956 1921402 cri.go:96] found id: ""
	I1222 00:36:18.411970 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.411987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:18.411992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:18.412059 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:18.441521 1921402 cri.go:96] found id: ""
	I1222 00:36:18.441535 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.441542 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:18.441549 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:18.441606 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:18.466320 1921402 cri.go:96] found id: ""
	I1222 00:36:18.466335 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.466342 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:18.466349 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:18.466359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.540350 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:18.540370 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:18.555809 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:18.555826 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:18.620233 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:18.612078   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.612476   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.613960   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.615241   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.616445   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:18.620247 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:18.620260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:18.690556 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:18.690578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.222964 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:21.233019 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:21.233077 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:21.258106 1921402 cri.go:96] found id: ""
	I1222 00:36:21.258120 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.258127 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:21.258133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:21.258190 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:21.282679 1921402 cri.go:96] found id: ""
	I1222 00:36:21.282693 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.282699 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:21.282704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:21.282761 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:21.311750 1921402 cri.go:96] found id: ""
	I1222 00:36:21.311764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.311789 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:21.311794 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:21.311861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:21.340844 1921402 cri.go:96] found id: ""
	I1222 00:36:21.340858 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.340866 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:21.340871 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:21.340940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:21.375873 1921402 cri.go:96] found id: ""
	I1222 00:36:21.375898 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.375905 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:21.375910 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:21.375973 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:21.406752 1921402 cri.go:96] found id: ""
	I1222 00:36:21.406767 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.406775 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:21.406780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:21.406845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:21.430878 1921402 cri.go:96] found id: ""
	I1222 00:36:21.430892 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.430899 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:21.430906 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:21.430925 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:21.499738 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:21.499757 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.532589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:21.532604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:21.600165 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:21.600183 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:21.615391 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:21.615407 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:21.684109 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:21.675788   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.676744   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678413   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678724   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.680260   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.185197 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:24.196664 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:24.196744 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:24.223518 1921402 cri.go:96] found id: ""
	I1222 00:36:24.223533 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.223540 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:24.223546 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:24.223609 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:24.250130 1921402 cri.go:96] found id: ""
	I1222 00:36:24.250145 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.250152 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:24.250159 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:24.250218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:24.275065 1921402 cri.go:96] found id: ""
	I1222 00:36:24.275079 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.275086 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:24.275091 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:24.275147 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:24.299603 1921402 cri.go:96] found id: ""
	I1222 00:36:24.299617 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.299625 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:24.299630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:24.299688 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:24.325078 1921402 cri.go:96] found id: ""
	I1222 00:36:24.325092 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.325099 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:24.325116 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:24.325173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:24.357840 1921402 cri.go:96] found id: ""
	I1222 00:36:24.357854 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.357871 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:24.357877 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:24.357933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:24.384301 1921402 cri.go:96] found id: ""
	I1222 00:36:24.384315 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.384345 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:24.384356 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:24.384371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:24.455452 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:24.455475 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:24.470577 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:24.470593 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:24.536053 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:24.526996   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.527804   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.529667   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.530226   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.531796   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.536310 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:24.536325 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:24.615547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:24.615572 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.151479 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:27.161793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:27.161853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:27.187096 1921402 cri.go:96] found id: ""
	I1222 00:36:27.187110 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.187117 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:27.187123 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:27.187179 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:27.212527 1921402 cri.go:96] found id: ""
	I1222 00:36:27.212541 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.212548 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:27.212553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:27.212613 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:27.241441 1921402 cri.go:96] found id: ""
	I1222 00:36:27.241455 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.241462 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:27.241467 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:27.241522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:27.266764 1921402 cri.go:96] found id: ""
	I1222 00:36:27.266778 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.266786 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:27.266791 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:27.266848 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:27.292091 1921402 cri.go:96] found id: ""
	I1222 00:36:27.292105 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.292113 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:27.292118 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:27.292175 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:27.316799 1921402 cri.go:96] found id: ""
	I1222 00:36:27.316813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.316821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:27.316826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:27.316896 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:27.341725 1921402 cri.go:96] found id: ""
	I1222 00:36:27.341739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.341746 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:27.341754 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:27.341764 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:27.363209 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:27.363227 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:27.435344 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:27.427029   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.427893   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.429703   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.430074   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.431490   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:27.435355 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:27.435367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:27.505547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:27.505565 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.534380 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:27.534395 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.100897 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:30.112047 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:30.112110 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:30.143007 1921402 cri.go:96] found id: ""
	I1222 00:36:30.143021 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.143028 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:30.143034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:30.143091 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:30.168174 1921402 cri.go:96] found id: ""
	I1222 00:36:30.168188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.168196 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:30.168202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:30.168270 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:30.198167 1921402 cri.go:96] found id: ""
	I1222 00:36:30.198181 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.198189 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:30.198194 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:30.198254 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:30.228238 1921402 cri.go:96] found id: ""
	I1222 00:36:30.228252 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.228259 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:30.228265 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:30.228326 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:30.252893 1921402 cri.go:96] found id: ""
	I1222 00:36:30.252907 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.252914 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:30.252920 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:30.252981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:30.282045 1921402 cri.go:96] found id: ""
	I1222 00:36:30.282067 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.282074 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:30.282080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:30.282146 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:30.308008 1921402 cri.go:96] found id: ""
	I1222 00:36:30.308020 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.308027 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:30.308036 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:30.308047 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:30.380173 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:30.380183 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:30.380193 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:30.455922 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:30.455942 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:30.489214 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:30.489236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.560024 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:30.560042 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
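By 00:36:33 the loop has been failing for nearly half a minute with no control-plane container ever appearing, so the describe-nodes attempts keep dying on the same refused connection. The refused endpoint can also be probed directly from inside the node; this curl call is illustrative and is not part of the logged run:

    # same endpoint kubectl dials; -k skips TLS verification, -s silences progress output
    minikube ssh -- 'curl -sk --max-time 5 https://localhost:8441/healthz || echo refused'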
	I1222 00:36:33.076015 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:33.086212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:33.086271 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:33.111816 1921402 cri.go:96] found id: ""
	I1222 00:36:33.111830 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.111837 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:33.111842 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:33.111899 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:33.139689 1921402 cri.go:96] found id: ""
	I1222 00:36:33.139702 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.139709 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:33.139715 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:33.139771 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:33.164725 1921402 cri.go:96] found id: ""
	I1222 00:36:33.164739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.164746 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:33.164752 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:33.164809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:33.191640 1921402 cri.go:96] found id: ""
	I1222 00:36:33.191654 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.191661 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:33.191666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:33.191721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:33.217604 1921402 cri.go:96] found id: ""
	I1222 00:36:33.217618 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.217625 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:33.217631 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:33.217690 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:33.243174 1921402 cri.go:96] found id: ""
	I1222 00:36:33.243188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.243195 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:33.243201 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:33.243256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:33.271929 1921402 cri.go:96] found id: ""
	I1222 00:36:33.271943 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.271950 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:33.271958 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:33.271968 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:33.300147 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:33.300163 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:33.366779 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:33.366797 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.382105 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:33.382122 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:33.445242 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:33.445252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:33.445261 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
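Every `kubectl describe nodes` attempt in this window fails the same way: nothing is listening on 127.0.0.1:8441, the apiserver port shown in the errors, which matches the empty crictl listings above: no kube-apiserver container ever started. A sketch to confirm both halves of that diagnosis by hand (again assuming `minikube ssh` access to the node; `ss` is the standard iproute2 socket lister):

    # Sketch: confirm the connection-refused diagnosis manually.
    minikube ssh -- 'sudo crictl ps -a --name=kube-apiserver'                    # expect no rows
    minikube ssh -- 'sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"'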
	I1222 00:36:36.014121 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:36.025137 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:36.025200 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:36.051061 1921402 cri.go:96] found id: ""
	I1222 00:36:36.051077 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.051085 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:36.051090 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:36.051154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:36.082445 1921402 cri.go:96] found id: ""
	I1222 00:36:36.082458 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.082466 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:36.082471 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:36.082543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:36.108654 1921402 cri.go:96] found id: ""
	I1222 00:36:36.108668 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.108676 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:36.108682 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:36.108746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:36.133581 1921402 cri.go:96] found id: ""
	I1222 00:36:36.133595 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.133602 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:36.133608 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:36.133672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:36.158297 1921402 cri.go:96] found id: ""
	I1222 00:36:36.158311 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.158318 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:36.158324 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:36.158397 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:36.188437 1921402 cri.go:96] found id: ""
	I1222 00:36:36.188451 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.188458 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:36.188464 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:36.188521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:36.213711 1921402 cri.go:96] found id: ""
	I1222 00:36:36.213725 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.213732 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:36.213740 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:36.213752 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:36.241158 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:36.241174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:36.309133 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:36.309152 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:36.323930 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:36.323945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:36.412954 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:36.412965 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:36.412977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:38.983065 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:38.993162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:38.993219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:39.019770 1921402 cri.go:96] found id: ""
	I1222 00:36:39.019784 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.019791 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:39.019797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:39.019854 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:39.044991 1921402 cri.go:96] found id: ""
	I1222 00:36:39.045005 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.045012 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:39.045017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:39.045074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:39.070008 1921402 cri.go:96] found id: ""
	I1222 00:36:39.070022 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.070029 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:39.070034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:39.070107 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:39.094755 1921402 cri.go:96] found id: ""
	I1222 00:36:39.094768 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.094776 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:39.094781 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:39.094837 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:39.120054 1921402 cri.go:96] found id: ""
	I1222 00:36:39.120068 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.120075 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:39.120080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:39.120136 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:39.144698 1921402 cri.go:96] found id: ""
	I1222 00:36:39.144711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.144719 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:39.144724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:39.144777 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:39.169125 1921402 cri.go:96] found id: ""
	I1222 00:36:39.169139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.169146 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:39.169153 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:39.169162 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:39.235404 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:39.235423 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:39.250312 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:39.250330 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:39.310727 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:39.310737 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:39.310749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:39.385727 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:39.385745 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
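The seven crictl probes in each pass are one command templated over the expected control-plane component names. A sketch of that probe as a shell loop (component names and crictl flags copied from the log; the loop itself is an illustration, not minikube's actual Go implementation in cri.go):

    # Sketch: the per-component container probe as a shell loop.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(minikube ssh -- "sudo crictl --timeout=10s ps -a --quiet --name=$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done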
	I1222 00:36:41.917457 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:41.927340 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:41.927400 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:41.952148 1921402 cri.go:96] found id: ""
	I1222 00:36:41.952162 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.952169 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:41.952175 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:41.952231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:41.977527 1921402 cri.go:96] found id: ""
	I1222 00:36:41.977542 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.977549 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:41.977555 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:41.977625 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:42.005444 1921402 cri.go:96] found id: ""
	I1222 00:36:42.005462 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.005470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:42.005476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:42.005551 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:42.041797 1921402 cri.go:96] found id: ""
	I1222 00:36:42.041813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.041820 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:42.041826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:42.041888 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:42.069781 1921402 cri.go:96] found id: ""
	I1222 00:36:42.069797 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.069806 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:42.069812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:42.069880 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:42.104984 1921402 cri.go:96] found id: ""
	I1222 00:36:42.105000 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.105009 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:42.105016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:42.105125 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:42.150604 1921402 cri.go:96] found id: ""
	I1222 00:36:42.150620 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.150629 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:42.150637 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:42.150649 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:42.223375 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:42.223400 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:42.242775 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:42.242802 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:42.318351 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:42.318361 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:42.318371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:42.393293 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:42.393311 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:44.928982 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:44.938908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:44.938968 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:44.963638 1921402 cri.go:96] found id: ""
	I1222 00:36:44.963652 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.963659 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:44.963665 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:44.963723 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:44.988816 1921402 cri.go:96] found id: ""
	I1222 00:36:44.988831 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.988838 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:44.988843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:44.988908 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:45.056183 1921402 cri.go:96] found id: ""
	I1222 00:36:45.056202 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.056212 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:45.056218 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:45.056311 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:45.094388 1921402 cri.go:96] found id: ""
	I1222 00:36:45.094407 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.094416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:45.094422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:45.094503 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:45.141522 1921402 cri.go:96] found id: ""
	I1222 00:36:45.141539 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.141547 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:45.141554 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:45.141632 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:45.184485 1921402 cri.go:96] found id: ""
	I1222 00:36:45.184517 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.184539 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:45.184545 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:45.184691 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:45.244764 1921402 cri.go:96] found id: ""
	I1222 00:36:45.244780 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.244789 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:45.244799 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:45.244810 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:45.323564 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:45.323584 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:45.341732 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:45.341749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:45.426921 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:45.426931 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:45.426941 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:45.496872 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:45.496897 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:48.027828 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:48.038486 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:48.038549 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:48.063156 1921402 cri.go:96] found id: ""
	I1222 00:36:48.063170 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.063177 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:48.063183 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:48.063242 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:48.092751 1921402 cri.go:96] found id: ""
	I1222 00:36:48.092764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.092771 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:48.092776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:48.092833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:48.117758 1921402 cri.go:96] found id: ""
	I1222 00:36:48.117771 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.117778 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:48.117784 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:48.117845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:48.156671 1921402 cri.go:96] found id: ""
	I1222 00:36:48.156685 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.156692 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:48.156697 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:48.156759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:48.181935 1921402 cri.go:96] found id: ""
	I1222 00:36:48.181948 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.181961 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:48.181966 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:48.182025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:48.206454 1921402 cri.go:96] found id: ""
	I1222 00:36:48.206468 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.206476 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:48.206481 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:48.206537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:48.230655 1921402 cri.go:96] found id: ""
	I1222 00:36:48.230669 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.230676 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:48.230684 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:48.230694 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:48.297753 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:48.297772 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:48.312560 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:48.312575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:48.405529 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:48.396998   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.397716   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399109   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399746   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.401379   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:48.405539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:48.405550 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:48.479216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:48.479235 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
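The passes above recur roughly every three seconds (00:36:33, :36, :39, :42, :45, :48, ...): a poll-until-healthy loop that keeps re-gathering logs while the apiserver stays down. A bounded version of that wait, as a sketch (the 3s interval matches the timestamps above; the 120s deadline is a hypothetical value, not taken from the log):

    # Sketch: poll for an apiserver process until a deadline passes.
    deadline=$((SECONDS + 120))                       # hypothetical timeout
    until minikube ssh -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"' >/dev/null 2>&1; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; exit 1; }
      sleep 3
    done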
	I1222 00:36:51.006751 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:51.017983 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:51.018043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:51.045081 1921402 cri.go:96] found id: ""
	I1222 00:36:51.045095 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.045102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:51.045133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:51.045231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:51.072909 1921402 cri.go:96] found id: ""
	I1222 00:36:51.072923 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.072931 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:51.072936 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:51.073002 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:51.101558 1921402 cri.go:96] found id: ""
	I1222 00:36:51.101582 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.101590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:51.101596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:51.101667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:51.128728 1921402 cri.go:96] found id: ""
	I1222 00:36:51.128762 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.128771 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:51.128777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:51.128843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:51.155107 1921402 cri.go:96] found id: ""
	I1222 00:36:51.155123 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.155129 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:51.155135 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:51.155196 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:51.183281 1921402 cri.go:96] found id: ""
	I1222 00:36:51.183295 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.183302 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:51.183308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:51.183365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:51.209022 1921402 cri.go:96] found id: ""
	I1222 00:36:51.209050 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.209057 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:51.209078 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:51.209088 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:51.272611 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:51.264496   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.265088   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.266719   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.267234   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.268804   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:51.272624 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:51.272634 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:51.342544 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:51.342563 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.373712 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:51.373727 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:51.452392 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:51.452419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:53.968915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:53.979167 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:53.979236 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:54.009488 1921402 cri.go:96] found id: ""
	I1222 00:36:54.009513 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.009521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:54.009527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:54.009592 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:54.036904 1921402 cri.go:96] found id: ""
	I1222 00:36:54.036919 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.036927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:54.036932 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:54.036991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:54.065561 1921402 cri.go:96] found id: ""
	I1222 00:36:54.065576 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.065583 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:54.065588 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:54.065651 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:54.092446 1921402 cri.go:96] found id: ""
	I1222 00:36:54.092472 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.092479 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:54.092485 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:54.092554 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:54.124282 1921402 cri.go:96] found id: ""
	I1222 00:36:54.124296 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.124303 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:54.124308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:54.124376 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:54.149069 1921402 cri.go:96] found id: ""
	I1222 00:36:54.149082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.149089 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:54.149095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:54.149150 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:54.173952 1921402 cri.go:96] found id: ""
	I1222 00:36:54.173966 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.173973 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:54.173985 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:54.173996 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:54.243687 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:54.243707 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:54.274613 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:54.274629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:54.347248 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:54.347266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:54.368410 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:54.368429 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:54.442889 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:54.435376   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.435813   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437040   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437607   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.439090   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:56.943148 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:56.953207 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:56.953265 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:56.980068 1921402 cri.go:96] found id: ""
	I1222 00:36:56.980082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:56.980089 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:56.980095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:56.980154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:57.009964 1921402 cri.go:96] found id: ""
	I1222 00:36:57.009979 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.009987 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:57.009992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:57.010056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:57.040772 1921402 cri.go:96] found id: ""
	I1222 00:36:57.040788 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.040796 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:57.040801 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:57.040861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:57.066436 1921402 cri.go:96] found id: ""
	I1222 00:36:57.066449 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.066456 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:57.066461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:57.066518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:57.090966 1921402 cri.go:96] found id: ""
	I1222 00:36:57.090980 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.090987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:57.090992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:57.091052 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:57.120134 1921402 cri.go:96] found id: ""
	I1222 00:36:57.120149 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.120156 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:57.120162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:57.120218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:57.145014 1921402 cri.go:96] found id: ""
	I1222 00:36:57.145027 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.145034 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:57.145041 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:57.145052 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:57.214243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:57.214263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:57.248369 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:57.248387 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:57.316316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:57.316337 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:57.331296 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:57.331313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:57.423941 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:57.415708   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.416209   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.417860   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.418329   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.419836   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
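	The cycle above repeats every few seconds: pgrep looks for the newest exact full-command-line match (-xnf) on a kube-apiserver process, crictl is queried once per control-plane component, and when every query comes back empty the tool falls back to gathering kubelet, dmesg, CRI-O, and describe-nodes output. A minimal Go sketch of that poll loop, assuming local execution and hypothetical helper names (the real tool runs these commands remotely via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Component names probed in the log above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}

	// listContainerIDs mirrors: sudo crictl --timeout=10s ps -a --quiet --name=<c>
	func listContainerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		for ; ; time.Sleep(3 * time.Second) {
			if apiserverRunning() {
				fmt.Println("apiserver process found")
				return
			}
			for _, c := range components {
				if len(listContainerIDs(c)) == 0 {
					fmt.Printf("no container was found matching %q\n", c)
				}
			}
			// At this point the real tool gathers kubelet/dmesg/CRI-O/
			// describe-nodes logs, exactly as the repeated cycles show.
		}
	}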
	I1222 00:36:59.924191 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:59.934553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:59.934611 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:59.963952 1921402 cri.go:96] found id: ""
	I1222 00:36:59.963967 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.963974 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:59.963980 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:59.964039 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:59.989241 1921402 cri.go:96] found id: ""
	I1222 00:36:59.989254 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.989261 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:59.989267 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:59.989346 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:00.041618 1921402 cri.go:96] found id: ""
	I1222 00:37:00.041645 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.041653 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:00.041659 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:00.041743 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:00.156035 1921402 cri.go:96] found id: ""
	I1222 00:37:00.156051 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.156058 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:00.156066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:00.156135 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:00.209590 1921402 cri.go:96] found id: ""
	I1222 00:37:00.209625 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.209642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:00.209658 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:00.209760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:00.265952 1921402 cri.go:96] found id: ""
	I1222 00:37:00.265968 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.265976 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:00.265982 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:00.266053 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:00.306640 1921402 cri.go:96] found id: ""
	I1222 00:37:00.306669 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.306678 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:00.306692 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:00.306705 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:00.391041 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:00.391062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:00.408447 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:00.408463 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:00.477856 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:00.469060   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.469638   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.471535   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.472037   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.473728   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:00.477866 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:00.477876 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:00.550139 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:00.550158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:03.081702 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:03.091652 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:03.091709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:03.116220 1921402 cri.go:96] found id: ""
	I1222 00:37:03.116234 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.116241 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:03.116247 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:03.116303 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:03.141299 1921402 cri.go:96] found id: ""
	I1222 00:37:03.141313 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.141340 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:03.141345 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:03.141402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:03.166615 1921402 cri.go:96] found id: ""
	I1222 00:37:03.166630 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.166637 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:03.166642 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:03.166706 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:03.196327 1921402 cri.go:96] found id: ""
	I1222 00:37:03.196341 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.196358 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:03.196366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:03.196432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:03.221691 1921402 cri.go:96] found id: ""
	I1222 00:37:03.221705 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.221712 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:03.221717 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:03.221774 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:03.247455 1921402 cri.go:96] found id: ""
	I1222 00:37:03.247470 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.247477 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:03.247482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:03.247540 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:03.277376 1921402 cri.go:96] found id: ""
	I1222 00:37:03.277398 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.277406 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:03.277414 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:03.277424 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:03.345137 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:03.345158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:03.362155 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:03.362174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:03.438149 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:03.429891   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.430725   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.431629   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.433223   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.434022   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:03.438160 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:03.438172 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:03.508243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:03.508262 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.037621 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:06.048285 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:06.048348 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:06.076896 1921402 cri.go:96] found id: ""
	I1222 00:37:06.076911 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.076918 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:06.076924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:06.076985 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:06.102650 1921402 cri.go:96] found id: ""
	I1222 00:37:06.102665 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.102671 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:06.102677 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:06.102732 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:06.127851 1921402 cri.go:96] found id: ""
	I1222 00:37:06.127865 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.127872 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:06.127878 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:06.127934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:06.154598 1921402 cri.go:96] found id: ""
	I1222 00:37:06.154612 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.154619 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:06.154625 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:06.154683 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:06.181128 1921402 cri.go:96] found id: ""
	I1222 00:37:06.181142 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.181149 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:06.181155 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:06.181215 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:06.207749 1921402 cri.go:96] found id: ""
	I1222 00:37:06.207762 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.207770 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:06.207776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:06.207832 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:06.240273 1921402 cri.go:96] found id: ""
	I1222 00:37:06.240287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.240295 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:06.240302 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:06.240313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:06.310353 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:06.310372 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.337909 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:06.337928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:06.413399 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:06.413419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:06.428074 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:06.428091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:06.491546 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:06.483115   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.484395   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.485866   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.486184   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.487595   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
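	The "container status" gather step uses a fallback chain so it still produces output when crictl is not on PATH: it resolves crictl via which, and if the whole crictl invocation fails it retries with docker. A minimal local sketch in Go, with the command string copied verbatim from the log and everything else assumed:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command string copied verbatim from the log; running it locally
		// (rather than through minikube's ssh_runner) is the assumption here.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("exit err: %v\n%s", err, out)
	}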
	I1222 00:37:08.991781 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:09.003482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:09.003561 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:09.030801 1921402 cri.go:96] found id: ""
	I1222 00:37:09.030816 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.030823 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:09.030829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:09.030884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:09.063705 1921402 cri.go:96] found id: ""
	I1222 00:37:09.063728 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.063740 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:09.063746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:09.063819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:09.094354 1921402 cri.go:96] found id: ""
	I1222 00:37:09.094370 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.094378 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:09.094384 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:09.094441 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:09.120692 1921402 cri.go:96] found id: ""
	I1222 00:37:09.120706 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.120714 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:09.120719 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:09.120776 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:09.146623 1921402 cri.go:96] found id: ""
	I1222 00:37:09.146642 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.146649 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:09.146654 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:09.146711 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:09.176276 1921402 cri.go:96] found id: ""
	I1222 00:37:09.176290 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.176297 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:09.176302 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:09.176359 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:09.205092 1921402 cri.go:96] found id: ""
	I1222 00:37:09.205106 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.205113 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:09.205120 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:09.205130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:09.271826 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:09.271851 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:09.287713 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:09.287729 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:09.372451 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:09.363564   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.364517   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366322   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366665   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.368181   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:09.372461 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:09.372474 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:09.456787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:09.456808 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
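	Every describe-nodes attempt fails identically: kubectl cannot even open a TCP connection to localhost:8441, so nothing is listening on the apiserver port at all. One hedged way to confirm that symptom directly, with only the port number taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Port 8441 comes from the kubectl errors above; the dial itself is
		// just an illustrative health probe, not part of the test suite.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("refused, as kubectl reported:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8441")
	}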
	I1222 00:37:11.988321 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:11.998626 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:11.998682 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:12.034498 1921402 cri.go:96] found id: ""
	I1222 00:37:12.034513 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.034521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:12.034527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:12.034588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:12.065008 1921402 cri.go:96] found id: ""
	I1222 00:37:12.065023 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.065033 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:12.065038 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:12.065098 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:12.092442 1921402 cri.go:96] found id: ""
	I1222 00:37:12.092456 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.092463 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:12.092469 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:12.092529 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:12.118395 1921402 cri.go:96] found id: ""
	I1222 00:37:12.118408 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.118416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:12.118421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:12.118476 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:12.143621 1921402 cri.go:96] found id: ""
	I1222 00:37:12.143634 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.143642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:12.143647 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:12.143702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:12.172923 1921402 cri.go:96] found id: ""
	I1222 00:37:12.172937 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.172944 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:12.172949 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:12.173004 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:12.199794 1921402 cri.go:96] found id: ""
	I1222 00:37:12.199808 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.199815 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:12.199822 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:12.199833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:12.263467 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:12.255080   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.255523   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257075   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257671   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.259356   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:12.263477 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:12.263488 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:12.334334 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:12.334358 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:12.374086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:12.374104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:12.447671 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:12.447690 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:14.962865 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:14.972963 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:14.973024 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:14.998158 1921402 cri.go:96] found id: ""
	I1222 00:37:14.998172 1921402 logs.go:282] 0 containers: []
	W1222 00:37:14.998179 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:14.998185 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:14.998243 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:15.029851 1921402 cri.go:96] found id: ""
	I1222 00:37:15.029867 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.029874 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:15.029882 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:15.029948 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:15.057647 1921402 cri.go:96] found id: ""
	I1222 00:37:15.057662 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.057668 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:15.057674 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:15.057742 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:15.084111 1921402 cri.go:96] found id: ""
	I1222 00:37:15.084126 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.084134 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:15.084140 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:15.084205 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:15.114963 1921402 cri.go:96] found id: ""
	I1222 00:37:15.114978 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.114984 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:15.114990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:15.115050 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:15.140108 1921402 cri.go:96] found id: ""
	I1222 00:37:15.140122 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.140129 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:15.140134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:15.140192 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:15.169273 1921402 cri.go:96] found id: ""
	I1222 00:37:15.169287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.169294 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:15.169302 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:15.169312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:15.235722 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:15.235741 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:15.250646 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:15.250662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:15.315252 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:15.306828   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.307685   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309165   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309676   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.311367   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:15.315262 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:15.315272 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:15.390764 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:15.390784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:17.922554 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:17.932679 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:17.932739 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:17.957865 1921402 cri.go:96] found id: ""
	I1222 00:37:17.957879 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.957886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:17.957891 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:17.957952 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:17.987323 1921402 cri.go:96] found id: ""
	I1222 00:37:17.987337 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.987344 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:17.987349 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:17.987410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:18.017446 1921402 cri.go:96] found id: ""
	I1222 00:37:18.017461 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.017469 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:18.017474 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:18.017538 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:18.043105 1921402 cri.go:96] found id: ""
	I1222 00:37:18.043119 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.043126 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:18.043131 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:18.043188 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:18.069214 1921402 cri.go:96] found id: ""
	I1222 00:37:18.069229 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.069236 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:18.069241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:18.069304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:18.094585 1921402 cri.go:96] found id: ""
	I1222 00:37:18.094598 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.094605 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:18.094610 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:18.094670 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:18.124271 1921402 cri.go:96] found id: ""
	I1222 00:37:18.124292 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.124300 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:18.124307 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:18.124318 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:18.191628 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:18.191645 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:18.206373 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:18.206389 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:18.271908 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:18.263844   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.264495   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.265977   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.266531   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.267971   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:18.271918 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:18.271931 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:18.347020 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:18.347041 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:20.886577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:20.896630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:20.896687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:20.924710 1921402 cri.go:96] found id: ""
	I1222 00:37:20.924724 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.924731 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:20.924737 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:20.924797 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:20.949929 1921402 cri.go:96] found id: ""
	I1222 00:37:20.949944 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.949951 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:20.949956 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:20.950015 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:20.975241 1921402 cri.go:96] found id: ""
	I1222 00:37:20.975254 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.975262 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:20.975273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:20.975332 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:21.007134 1921402 cri.go:96] found id: ""
	I1222 00:37:21.007149 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.007157 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:21.007162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:21.007226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:21.036823 1921402 cri.go:96] found id: ""
	I1222 00:37:21.036837 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.036845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:21.036850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:21.036922 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:21.066186 1921402 cri.go:96] found id: ""
	I1222 00:37:21.066201 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.066208 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:21.066214 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:21.066273 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:21.092594 1921402 cri.go:96] found id: ""
	I1222 00:37:21.092609 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.092616 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:21.092624 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:21.092658 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:21.155743 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:21.147229   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.148232   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.149047   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150532   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150962   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:21.155753 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:21.155763 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:21.230114 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:21.230133 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:21.258046 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:21.258062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:21.330592 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:21.330619 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:23.847823 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:23.857930 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:23.857993 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:23.883272 1921402 cri.go:96] found id: ""
	I1222 00:37:23.883286 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.883294 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:23.883300 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:23.883355 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:23.907502 1921402 cri.go:96] found id: ""
	I1222 00:37:23.907515 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.907523 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:23.907528 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:23.907588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:23.932684 1921402 cri.go:96] found id: ""
	I1222 00:37:23.932698 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.932705 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:23.932711 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:23.932769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:23.957636 1921402 cri.go:96] found id: ""
	I1222 00:37:23.957650 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.957657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:23.957662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:23.957719 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:23.982476 1921402 cri.go:96] found id: ""
	I1222 00:37:23.982490 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.982497 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:23.982502 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:23.982560 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:24.014243 1921402 cri.go:96] found id: ""
	I1222 00:37:24.014258 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.014265 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:24.014278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:24.014341 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:24.041726 1921402 cri.go:96] found id: ""
	I1222 00:37:24.041740 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.041748 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:24.041756 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:24.041766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:24.110293 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:24.110312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:24.126712 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:24.126736 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:24.190552 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:37:24.190564 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:24.190575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:24.262140 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:24.262165 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:26.797351 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:26.808692 1921402 kubeadm.go:602] duration metric: took 4m2.99406476s to restartPrimaryControlPlane
	W1222 00:37:26.808754 1921402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:37:26.808829 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:37:27.218865 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:37:27.231651 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:37:27.239390 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:37:27.239447 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:37:27.246990 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:37:27.247000 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:37:27.247052 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:37:27.254602 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:37:27.254657 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:37:27.261917 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:37:27.269756 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:37:27.269819 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:37:27.277175 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.285102 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:37:27.285164 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.292582 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:37:27.300147 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:37:27.300201 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:37:27.307155 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:37:27.346116 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:37:27.346165 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:37:27.414020 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:37:27.414089 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:37:27.414123 1921402 kubeadm.go:319] OS: Linux
	I1222 00:37:27.414168 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:37:27.414215 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:37:27.414261 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:37:27.414307 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:37:27.414354 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:37:27.414404 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:37:27.414448 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:37:27.414494 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:37:27.414538 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:37:27.481436 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:37:27.481554 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:37:27.481674 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:37:27.489751 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:37:27.493685 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:37:27.493793 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:37:27.493881 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:37:27.493967 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:37:27.494032 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:37:27.494108 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:37:27.494164 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:37:27.494230 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:37:27.494294 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:37:27.494372 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:37:27.494448 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:37:27.494487 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:37:27.494545 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:37:27.957235 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:37:28.242322 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:37:28.521612 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:37:28.976211 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:37:29.260315 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:37:29.260957 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:37:29.263905 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:37:29.266956 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:37:29.267051 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:37:29.267139 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:37:29.268296 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:37:29.283008 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:37:29.283332 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:37:29.290939 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:37:29.291397 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:37:29.291445 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:37:29.423121 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:37:29.423418 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:41:29.424266 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00111221s
	I1222 00:41:29.424291 1921402 kubeadm.go:319] 
	I1222 00:41:29.424347 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:41:29.424379 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:41:29.424483 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:41:29.424487 1921402 kubeadm.go:319] 
	I1222 00:41:29.424591 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:41:29.424622 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:41:29.424652 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:41:29.424655 1921402 kubeadm.go:319] 
	I1222 00:41:29.429048 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:41:29.429508 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:41:29.429617 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:41:29.429851 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:41:29.429855 1921402 kubeadm.go:319] 
	I1222 00:41:29.429925 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:41:29.430059 1921402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00111221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:41:29.430143 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:41:29.837125 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:41:29.849742 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:41:29.849801 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:41:29.857510 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:41:29.857520 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:41:29.857569 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:41:29.865043 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:41:29.865100 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:41:29.872214 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:41:29.879983 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:41:29.880037 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:41:29.887891 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.895656 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:41:29.895711 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.902944 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:41:29.910473 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:41:29.910531 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:41:29.917929 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:41:29.958087 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:41:29.958157 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:41:30.050415 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:41:30.050500 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:41:30.050542 1921402 kubeadm.go:319] OS: Linux
	I1222 00:41:30.050625 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:41:30.050679 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:41:30.050743 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:41:30.050790 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:41:30.050861 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:41:30.050918 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:41:30.050963 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:41:30.051010 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:41:30.051055 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:41:30.118501 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:41:30.118617 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:41:30.118712 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:41:30.125936 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:41:30.127797 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:41:30.127893 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:41:30.129701 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:41:30.129794 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:41:30.129856 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:41:30.129967 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:41:30.130025 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:41:30.130096 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:41:30.130183 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:41:30.130272 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:41:30.130344 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:41:30.130391 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:41:30.130455 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:41:30.344018 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:41:30.899259 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:41:31.157696 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:41:31.268170 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:41:31.664677 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:41:31.665563 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:41:31.668250 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:41:31.669734 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:41:31.669834 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:41:31.670177 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:41:31.671652 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:41:31.687500 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:41:31.687601 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:41:31.694901 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:41:31.695472 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:41:31.695535 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:41:31.830593 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:41:31.830722 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:45:31.830877 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001320631s
	I1222 00:45:31.831182 1921402 kubeadm.go:319] 
	I1222 00:45:31.831295 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:45:31.831352 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:45:31.831682 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:45:31.831690 1921402 kubeadm.go:319] 
	I1222 00:45:31.831881 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:45:31.831936 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:45:31.831990 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:45:31.831994 1921402 kubeadm.go:319] 
	I1222 00:45:31.836486 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:45:31.837376 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:45:31.837502 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:45:31.837782 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:45:31.837816 1921402 kubeadm.go:319] 
	I1222 00:45:31.837891 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:45:31.837949 1921402 kubeadm.go:403] duration metric: took 12m8.05957574s to StartCluster
	I1222 00:45:31.837990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:45:31.838056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:45:31.866824 1921402 cri.go:96] found id: ""
	I1222 00:45:31.866837 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.866844 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:45:31.866850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:45:31.866906 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:45:31.892772 1921402 cri.go:96] found id: ""
	I1222 00:45:31.892797 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.892805 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:45:31.892812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:45:31.892881 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:45:31.919961 1921402 cri.go:96] found id: ""
	I1222 00:45:31.919975 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.919982 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:45:31.919987 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:45:31.920043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:45:31.945683 1921402 cri.go:96] found id: ""
	I1222 00:45:31.945697 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.945704 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:45:31.945710 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:45:31.945770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:45:31.971052 1921402 cri.go:96] found id: ""
	I1222 00:45:31.971066 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.971074 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:45:31.971080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:45:31.971139 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:45:31.997143 1921402 cri.go:96] found id: ""
	I1222 00:45:31.997158 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.997165 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:45:31.997170 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:45:31.997226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:45:32.024698 1921402 cri.go:96] found id: ""
	I1222 00:45:32.024713 1921402 logs.go:282] 0 containers: []
	W1222 00:45:32.024721 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:45:32.024729 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:45:32.024739 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:45:32.092110 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:45:32.092130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:45:32.109516 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:45:32.109532 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:45:32.182932 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:45:32.182942 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:45:32.182952 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:45:32.255366 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:45:32.255386 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:45:32.288418 1921402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:45:32.288449 1921402 out.go:285] * 
	W1222 00:45:32.288511 1921402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.288527 1921402 out.go:285] * 
	W1222 00:45:32.290657 1921402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:45:32.295098 1921402 out.go:203] 
	W1222 00:45:32.297043 1921402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.297089 1921402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:45:32.297111 1921402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:45:32.299905 1921402 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.02679827Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.026861283Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.026928974Z" level=info msg="Create NRI interface"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027081987Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027098405Z" level=info msg="runtime interface created"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027113797Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027130551Z" level=info msg="runtime interface starting up..."
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027141611Z" level=info msg="starting plugins..."
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027158522Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:33:22 functional-837733 crio[9971]: time="2025-12-22T00:33:22.027257579Z" level=info msg="No systemd watchdog enabled"
	Dec 22 00:33:22 functional-837733 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.485049198Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=f8b31a28-1e5e-4924-b750-a0efe7effecd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.485769225Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=96015691-150e-4012-8e1b-65198bb7cc23 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.486269641Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=155cce42-29f5-4d02-81fe-382aac99f970 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.486755321Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=3cd990c1-1fbc-47fd-a924-3e1fddfaba39 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.487198967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=fb5de215-5f60-4f7a-a012-72665a4d2b87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.48764071Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=88930c08-aae3-4b3f-862d-918e893427ff name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.488127998Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=0b18c52c-a177-41d0-84f8-a99061f82c45 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.121686935Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=a65ca03c-da33-4282-bf75-37afebaaec0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122407537Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=3242b675-c84d-4560-8bb8-f6d771881c0b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:35.738032   21387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:35.738637   21387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:35.740040   21387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:35.740502   21387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:35.742061   21387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:45:35 up 1 day,  7:28,  0 user,  load average: 0.39, 0.28, 0.50
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:45:32 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:33 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 00:45:33 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:33 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:33 functional-837733 kubelet[21265]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:33 functional-837733 kubelet[21265]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:33 functional-837733 kubelet[21265]: E1222 00:45:33.681495   21265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:33 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:33 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:34 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 22 00:45:34 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:34 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:34 functional-837733 kubelet[21272]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:34 functional-837733 kubelet[21272]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:34 functional-837733 kubelet[21272]: E1222 00:45:34.392192   21272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:34 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:34 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:35 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 22 00:45:35 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:35 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:35 functional-837733 kubelet[21306]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:35 functional-837733 kubelet[21306]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:35 functional-837733 kubelet[21306]: E1222 00:45:35.153829   21306 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:35 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:35 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (352.772147ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.13s)
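The kubelet journal above pins down the root cause of this failure: kubelet v1.35.0-rc.1 exits validation because the node is still on a cgroup v1 hierarchy, and the kubeadm warning states that v1 support must now be opted into via the 'FailCgroupV1' configuration option. A minimal diagnostic sketch in shell follows; the YAML field spelling 'failCgroupV1' is inferred from the warning text rather than verified against the kubelet API, and how minikube would inject it is left open:

	# Print the host cgroup filesystem type: "cgroup2fs" on v2, "tmpfs" on v1.
	stat -fc %T /sys/fs/cgroup
	# Inspect the kubelet restart loop directly, as the kubeadm output suggests.
	journalctl -xeu kubelet | grep -i cgroup
	# Illustrative opt-in fragment per the warning (assumed field spelling).
	cat <<'EOF' > kubelet-cgroupv1-optin.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF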

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-837733 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-837733 apply -f testdata/invalidsvc.yaml: exit status 1 (61.856022ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-837733 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)
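The apply fails before validation ever runs: kubectl must download the OpenAPI schema from the apiserver, and 192.168.49.2:8441 refuses connections. A quick probe with plain kubectl separates "invalid manifest" from "no control plane":

	# Prints "ok" only when the apiserver behind the functional-837733 context is serving.
	kubectl --context functional-837733 get --raw /healthz

The --validate=false escape hatch suggested in the error message would not help here, since the connection itself is refused.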

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-837733 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-837733 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-837733 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-837733 --alsologtostderr -v=1] stderr:
I1222 00:48:00.658715 1940331 out.go:360] Setting OutFile to fd 1 ...
I1222 00:48:00.658836 1940331 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:00.658848 1940331 out.go:374] Setting ErrFile to fd 2...
I1222 00:48:00.658853 1940331 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:00.659112 1940331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:48:00.659371 1940331 mustload.go:66] Loading cluster: functional-837733
I1222 00:48:00.659794 1940331 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:00.660274 1940331 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:48:00.677223 1940331 host.go:66] Checking if "functional-837733" exists ...
I1222 00:48:00.677562 1940331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:48:00.734054 1940331 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:48:00.725281421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:48:00.734175 1940331 api_server.go:166] Checking apiserver status ...
I1222 00:48:00.734230 1940331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1222 00:48:00.734272 1940331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:48:00.750716 1940331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
W1222 00:48:00.846968 1940331 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1222 00:48:00.850189 1940331 out.go:179] * The control-plane node functional-837733 apiserver is not running: (state=Stopped)
I1222 00:48:00.853202 1940331 out.go:179]   To start a cluster, run: "minikube start -p functional-837733"
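The dashboard command produces no URL because minikube's apiserver check above ('sudo pgrep -xnf kube-apiserver.*minikube.*' over SSH) finds no matching process. The same probe can be replayed by hand; running it through 'docker exec' against the kic node container, rather than SSH on port 38471, is an assumption of this sketch:

	# Exits non-zero (as in the log above) when no kube-apiserver process exists.
	docker exec functional-837733 pgrep -xnf 'kube-apiserver.*minikube.*'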
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
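The inspect output confirms the node container is running, with every service port bound to 127.0.0.1; the apiserver's 8441/tcp maps to host port 38474. The Go-template pattern minikube itself used earlier for 22/tcp extracts any of these mappings, for example:

	# Prints the host port forwarded to the container's apiserver port (38474 here).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-837733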
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (299.409071ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-837733 addons list                                                                                                                       │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ addons    │ functional-837733 addons list -o json                                                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ mount     │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001:/mount-9p --alsologtostderr -v=1              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh       │ functional-837733 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh -- ls -la /mount-9p                                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh cat /mount-9p/test-1766364474200773490                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh       │ functional-837733 ssh sudo umount -f /mount-9p                                                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ mount     │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852505981/001:/mount-9p --alsologtostderr -v=1 --port 33765 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh       │ functional-837733 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh       │ functional-837733 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh -- ls -la /mount-9p                                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh sudo umount -f /mount-9p                                                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ mount     │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount1 --alsologtostderr -v=1                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh       │ functional-837733 ssh findmnt -T /mount1                                                                                                            │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ mount     │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount2 --alsologtostderr -v=1                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ mount     │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount3 --alsologtostderr -v=1                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh       │ functional-837733 ssh findmnt -T /mount2                                                                                                            │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh       │ functional-837733 ssh findmnt -T /mount3                                                                                                            │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ mount     │ -p functional-837733 --kill=true                                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ start     │ -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ start     │ -p functional-837733 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                   │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ start     │ -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-837733 --alsologtostderr -v=1                                                                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:48:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:48:00.432356 1940285 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:48:00.432506 1940285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:00.432512 1940285 out.go:374] Setting ErrFile to fd 2...
	I1222 00:48:00.432518 1940285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:00.433011 1940285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:48:00.433541 1940285 out.go:368] Setting JSON to false
	I1222 00:48:00.434510 1940285 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":113430,"bootTime":1766251051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:48:00.434603 1940285 start.go:143] virtualization:  
	I1222 00:48:00.438306 1940285 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:48:00.441464 1940285 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:48:00.441573 1940285 notify.go:221] Checking for updates...
	I1222 00:48:00.447466 1940285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:48:00.450435 1940285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:48:00.453538 1940285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:48:00.456664 1940285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:48:00.460323 1940285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:48:00.463901 1940285 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:48:00.464651 1940285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:48:00.501637 1940285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:48:00.501792 1940285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:48:00.569175 1940285 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:48:00.558846475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:48:00.569280 1940285 docker.go:319] overlay module found
	I1222 00:48:00.572557 1940285 out.go:179] * Using the docker driver based on existing profile
	I1222 00:48:00.575496 1940285 start.go:309] selected driver: docker
	I1222 00:48:00.575517 1940285 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:48:00.575624 1940285 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:48:00.579072 1940285 out.go:203] 
	W1222 00:48:00.581978 1940285 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 00:48:00.584851 1940285 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964031496Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964252675Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964296825Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964375806Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-837733 found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005620606Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005780347Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005818057Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-837733 found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040267534Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040399911Z" level=info msg="Image localhost/kicbase/echo-server:functional-837733 not found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040434683Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-837733 found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852687205Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.85283559Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852880003Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852954134Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-837733 found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886292818Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886432169Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886470281Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-837733 found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.911030791Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=369fa22a-71b8-4625-900e-18b79a9137e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962795293Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962862203Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=9c28ba1f-7abf-4263-860e-8a0ef6162d0e name=/runtime.v1.ImageService/ListImages
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:48:01.931157   24141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:48:01.931788   24141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:48:01.932841   24141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:48:01.933465   24141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:48:01.935100   24141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:48:01 up 1 day,  7:30,  0 user,  load average: 0.29, 0.32, 0.49
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:47:59 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:59 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 517.
	Dec 22 00:47:59 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:59 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:59 functional-837733 kubelet[24020]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:59 functional-837733 kubelet[24020]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:59 functional-837733 kubelet[24020]: E1222 00:47:59.903252   24020 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:59 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:59 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:48:00 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 22 00:48:00 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:48:00 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:48:00 functional-837733 kubelet[24025]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:48:00 functional-837733 kubelet[24025]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:48:00 functional-837733 kubelet[24025]: E1222 00:48:00.644992   24025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:48:00 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:48:00 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:48:01 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 22 00:48:01 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:48:01 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:48:01 functional-837733 kubelet[24054]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:48:01 functional-837733 kubelet[24054]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:48:01 functional-837733 kubelet[24054]: E1222 00:48:01.399708   24054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:48:01 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:48:01 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
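The kubelet crash loop captured above is a validation failure, not a runtime crash: this v1.35.0-rc.1 kubelet refuses to start because the host is still on cgroup v1. A minimal Go sketch of the usual host-side check (a hypothetical diagnostic, not from the minikube or kubelet codebase; the marker file is the standard cgroup v2 convention):

	// cgroupcheck.go - hypothetical diagnostic, not part of this test suite.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a unified (cgroup v2) hierarchy this file exists; on cgroup v1
		// it does not, which is the state the kubelet above rejects.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 - kubelet v1.35.0-rc.1 fails validation here")
		}
	}

With the kubelet never getting past validation, the apiserver on port 8441 stays down, which also explains the connection-refused errors in the "describe nodes" section above.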
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (346.367013ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.78s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (2.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 status: exit status 2 (326.829975ms)

-- stdout --
	functional-837733
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-837733 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (302.979155ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-837733 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 status -o json: exit status 2 (310.981129ms)

-- stdout --
	{"Name":"functional-837733","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-837733 status -o json" : exit status 2
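All three invocations above exit with status 2 while the host itself is Running, which is why helpers_test.go below notes "status error: exit status 2 (may be ok)". A short Go sketch, assuming only the JSON shape printed above (binary path and profile name copied from the commands shown), for decoding the -o json output despite the non-zero exit:

	// statusprobe.go - hypothetical sketch, not part of the test suite.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type clusterStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// Output() returns the captured stdout alongside the *exec.ExitError,
		// so the expected exit status 2 does not lose the JSON payload.
		out, _ := exec.Command("out/minikube-linux-arm64",
			"-p", "functional-837733", "status", "-o", "json").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Fprintln(os.Stderr, "unexpected status output:", err)
			os.Exit(1)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}

Note the inconsistency the test tripped over: the plain and custom-format outputs report the kubelet as Stopped, while the JSON output reports it Running, presumably racing against the restart loop visible in the kubelet log.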
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
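In the inspect output above, HostConfig.PortBindings request dynamic host ports (empty "HostPort"), and the values Docker actually assigned appear only under NetworkSettings.Ports; that is why minikube reads templates like {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} later in this log. A small Go sketch, assuming a docker CLI on PATH (container name copied from this report), that pulls the same field out of the inspect JSON:

	// portlookup.go - hypothetical sketch, not part of the test suite.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// inspectEntry models only the fragment of `docker inspect` output needed here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp, HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-837733").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			fmt.Fprintln(os.Stderr, "no inspect data")
			os.Exit(1)
		}
		if b := entries[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
			fmt.Printf("22/tcp published at %s:%s\n", b[0].HostIp, b[0].HostPort)
		}
	}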
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (295.066202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-837733 image ls                                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /usr/share/ca-certificates/1881219.pem                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image save kicbase/echo-server:functional-837733 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/ssl/certs/18812192.pem                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image rm kicbase/echo-server:functional-837733 --alsologtostderr                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /usr/share/ca-certificates/18812192.pem                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image ls                                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/test/nested/copy/1881219/hosts                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image ls                                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ service │ functional-837733 service list                                                                                                                            │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ image   │ functional-837733 image save --daemon kicbase/echo-server:functional-837733 --alsologtostderr                                                             │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ service │ functional-837733 service list -o json                                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ ssh     │ functional-837733 ssh echo hello                                                                                                                          │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ service │ functional-837733 service --namespace=default --https --url hello-node                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ service │ functional-837733 service hello-node --url --format={{.IP}}                                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ ssh     │ functional-837733 ssh cat /etc/hostname                                                                                                                   │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ tunnel  │ functional-837733 tunnel --alsologtostderr                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ service │ functional-837733 service hello-node --url                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ tunnel  │ functional-837733 tunnel --alsologtostderr                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ tunnel  │ functional-837733 tunnel --alsologtostderr                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ addons  │ functional-837733 addons list                                                                                                                             │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ addons  │ functional-837733 addons list -o json                                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:33:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
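	The four header lines above describe the klog prefix carried by every line that follows. A minimal Go sketch (hypothetical, not part of the harness) that splits one of these lines according to that [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] format:

	// klogsplit.go - hypothetical parser for the prefix documented above.
	package main

	import (
		"fmt"
		"regexp"
	)

	var prefix = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1222 00:33:19.123176 1921402 out.go:360] Setting OutFile to fd 1 ..."
		if m := prefix.FindStringSubmatch(line); m != nil {
			fmt.Printf("sev=%s month=%s day=%s time=%s tid=%s src=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
		}
	}
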
	I1222 00:33:19.123176 1921402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:33:19.123288 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123297 1921402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:33:19.123301 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123533 1921402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:33:19.123864 1921402 out.go:368] Setting JSON to false
	I1222 00:33:19.124702 1921402 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112549,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:33:19.124753 1921402 start.go:143] virtualization:  
	I1222 00:33:19.129939 1921402 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:33:19.132820 1921402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:33:19.132934 1921402 notify.go:221] Checking for updates...
	I1222 00:33:19.138397 1921402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:33:19.141280 1921402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:33:19.144132 1921402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:33:19.147015 1921402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:33:19.149703 1921402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:33:19.152994 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:19.153132 1921402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:33:19.180144 1921402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:33:19.180247 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.239273 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.229718138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.239382 1921402 docker.go:319] overlay module found
	I1222 00:33:19.242404 1921402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:33:19.245466 1921402 start.go:309] selected driver: docker
	I1222 00:33:19.245476 1921402 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.245601 1921402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:33:19.245707 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.303768 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.294614837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.304252 1921402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:33:19.304275 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:19.304327 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:19.304366 1921402 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.307509 1921402 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:33:19.310426 1921402 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:33:19.313438 1921402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:33:19.316296 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:19.316334 1921402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:33:19.316337 1921402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:33:19.316345 1921402 cache.go:65] Caching tarball of preloaded images
	I1222 00:33:19.316452 1921402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:33:19.316461 1921402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:33:19.316560 1921402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:33:19.335533 1921402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:33:19.335544 1921402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:33:19.335564 1921402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:33:19.335600 1921402 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:33:19.335668 1921402 start.go:364] duration metric: took 47.728µs to acquireMachinesLock for "functional-837733"
	I1222 00:33:19.335688 1921402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:33:19.335699 1921402 fix.go:54] fixHost starting: 
	I1222 00:33:19.335947 1921402 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:33:19.365010 1921402 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:33:19.365031 1921402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:33:19.368348 1921402 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:33:19.368376 1921402 machine.go:94] provisionDockerMachine start ...
	I1222 00:33:19.368449 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.391712 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.392069 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.392078 1921402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:33:19.528918 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.528932 1921402 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:33:19.529008 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.546388 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.546693 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.546705 1921402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:33:19.690177 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.690264 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.708307 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.708609 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.708623 1921402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:33:19.841808 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:33:19.841826 1921402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:33:19.841843 1921402 ubuntu.go:190] setting up certificates
	I1222 00:33:19.841860 1921402 provision.go:84] configureAuth start
	I1222 00:33:19.841918 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:19.861819 1921402 provision.go:143] copyHostCerts
	I1222 00:33:19.861874 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:33:19.861889 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:33:19.861964 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:33:19.862062 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:33:19.862066 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:33:19.862089 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:33:19.862134 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:33:19.862137 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:33:19.862158 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:33:19.862198 1921402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:33:19.952084 1921402 provision.go:177] copyRemoteCerts
	I1222 00:33:19.952138 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:33:19.952182 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.970262 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.085780 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:33:20.107727 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:33:20.128189 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:33:20.148142 1921402 provision.go:87] duration metric: took 306.259546ms to configureAuth
	I1222 00:33:20.148170 1921402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:33:20.148393 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:20.148508 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.170002 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:20.170298 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:20.170316 1921402 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:33:20.512165 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:33:20.512178 1921402 machine.go:97] duration metric: took 1.143796307s to provisionDockerMachine
	I1222 00:33:20.512211 1921402 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:33:20.512232 1921402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:33:20.512310 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:33:20.512363 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.530913 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.629793 1921402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:33:20.633223 1921402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:33:20.633240 1921402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:33:20.633250 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:33:20.633308 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:33:20.633414 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:33:20.633491 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:33:20.633538 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:33:20.641126 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:20.658985 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:33:20.677182 1921402 start.go:296] duration metric: took 164.956569ms for postStartSetup
	I1222 00:33:20.677276 1921402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:33:20.677314 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.694936 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.786654 1921402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:33:20.791424 1921402 fix.go:56] duration metric: took 1.455724646s for fixHost
	I1222 00:33:20.791440 1921402 start.go:83] releasing machines lock for "functional-837733", held for 1.455763874s
	I1222 00:33:20.791545 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:20.808426 1921402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:33:20.808469 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.808475 1921402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:33:20.808537 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.830636 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.834199 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.933012 1921402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:33:21.025436 1921402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:33:21.067278 1921402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:33:21.071776 1921402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:33:21.071868 1921402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:33:21.079967 1921402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:33:21.079982 1921402 start.go:496] detecting cgroup driver to use...
	I1222 00:33:21.080013 1921402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:33:21.080062 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:33:21.095680 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:33:21.108798 1921402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:33:21.108852 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:33:21.124535 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:33:21.138031 1921402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:33:21.254740 1921402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:33:21.374618 1921402 docker.go:234] disabling docker service ...
	I1222 00:33:21.374676 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:33:21.389413 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:33:21.402447 1921402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:33:21.520019 1921402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:33:21.643581 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:33:21.657853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:33:21.671138 1921402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:33:21.671202 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.680093 1921402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:33:21.680158 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.688698 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.697375 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.705901 1921402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:33:21.713923 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.722872 1921402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.731197 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
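Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A hedged way to confirm the edits landed; the expected values in the comments are inferred from the sed expressions themselves, not from a dump of the file:

    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
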
	I1222 00:33:21.739610 1921402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:33:21.747277 1921402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:33:21.754578 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:21.862305 1921402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:33:22.033055 1921402 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:33:22.033138 1921402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:33:22.037243 1921402 start.go:564] Will wait 60s for crictl version
	I1222 00:33:22.037298 1921402 ssh_runner.go:195] Run: which crictl
	I1222 00:33:22.040800 1921402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:33:22.068746 1921402 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:33:22.068856 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.098980 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.132885 1921402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:33:22.135905 1921402 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:33:22.151401 1921402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:33:22.158212 1921402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:33:22.160827 1921402 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:33:22.160965 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:22.161027 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.202970 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.202982 1921402 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:33:22.203036 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.229110 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.229123 1921402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:33:22.229129 1921402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:33:22.229228 1921402 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:33:22.229311 1921402 ssh_runner.go:195] Run: crio config
	I1222 00:33:22.300174 1921402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:33:22.300213 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:22.300224 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:22.300232 1921402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:33:22.300257 1921402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:33:22.300399 1921402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
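Once this rendered config is on the node (it is scp'd as /var/tmp/minikube/kubeadm.yaml.new a few lines below), it can be sanity-checked offline. A sketch, assuming a kubeadm recent enough to ship the config validate subcommand (present in the v1.35 binaries this run uses):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
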
	I1222 00:33:22.300509 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:33:22.308005 1921402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:33:22.308080 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:33:22.315446 1921402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:33:22.327549 1921402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:33:22.340301 1921402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1222 00:33:22.356214 1921402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:33:22.360143 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:22.489828 1921402 ssh_runner.go:195] Run: sudo systemctl start kubelet
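The two units scp'd above (the kubelet.service file and the 10-kubeadm.conf drop-in carrying the ExecStart flags shown earlier) only take effect after this daemon-reload and start. One way to confirm systemd merged them, assuming a standard systemd layout:

    sudo systemctl cat kubelet    # prints the unit plus the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet
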
	I1222 00:33:23.096445 1921402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:33:23.096455 1921402 certs.go:195] generating shared ca certs ...
	I1222 00:33:23.096469 1921402 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:33:23.096601 1921402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:33:23.096676 1921402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:33:23.096682 1921402 certs.go:257] generating profile certs ...
	I1222 00:33:23.096765 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:33:23.096814 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:33:23.096847 1921402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:33:23.097000 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:33:23.097032 1921402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:33:23.097039 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:33:23.097063 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:33:23.097085 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:33:23.097109 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:33:23.097150 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:23.097934 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:33:23.122733 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:33:23.143796 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:33:23.165783 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:33:23.184349 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:33:23.201211 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:33:23.218537 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:33:23.235950 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:33:23.253260 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:33:23.270022 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:33:23.287385 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:33:23.304167 1921402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:33:23.316581 1921402 ssh_runner.go:195] Run: openssl version
	I1222 00:33:23.322651 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.329731 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:33:23.336799 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340358 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340410 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.381351 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:33:23.388858 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.396030 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:33:23.403636 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407354 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407408 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.449165 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:33:23.456643 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.463729 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:33:23.471167 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474893 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474950 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.516093 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
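Each openssl x509 -hash call above prints the 8-hex-digit subject hash (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the two test certs) that OpenSSL uses to locate CAs in /etc/ssl/certs, and the following test -L confirms the matching <hash>.0 symlink exists. A minimal sketch of the same link-and-verify step for one certificate; minikube performs the equivalent internally:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "trust link ${h}.0 present"
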
	I1222 00:33:23.523732 1921402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:33:23.527686 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:33:23.568782 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:33:23.609619 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:33:23.650314 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:33:23.696111 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:33:23.737413 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
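The -checkend 86400 probes above ask openssl whether each control-plane certificate stays valid for at least another 86400 seconds (24 hours): exit status 0 means it will not expire inside that window, which is what lets this run skip certificate regeneration. For example:

    if openssl x509 -noout -checkend 86400 \
            -in /var/lib/minikube/certs/etcd/server.crt; then
        echo "cert valid for >= 24h"    # openssl prints "Certificate will not expire"
    else
        echo "cert expires within 24h; regenerate"
    fi
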
	I1222 00:33:23.778391 1921402 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:23.778492 1921402 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:33:23.778592 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.806800 1921402 cri.go:96] found id: ""
	I1222 00:33:23.806880 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:33:23.814609 1921402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:33:23.814618 1921402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:33:23.814668 1921402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:33:23.821980 1921402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.822486 1921402 kubeconfig.go:125] found "functional-837733" server: "https://192.168.49.2:8441"
	I1222 00:33:23.823779 1921402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:33:23.831691 1921402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:18:48.647493207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:33:22.349448241 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1222 00:33:23.831708 1921402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:33:23.831719 1921402 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 00:33:23.831773 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.866496 1921402 cri.go:96] found id: ""
	I1222 00:33:23.866558 1921402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:33:23.889134 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:33:23.897048 1921402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 22 00:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 00:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:22 /etc/kubernetes/scheduler.conf
	
	I1222 00:33:23.897106 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:33:23.904740 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:33:23.912683 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.912741 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:33:23.920107 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.927907 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.927961 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.935547 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:33:23.943201 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.943254 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:33:23.950967 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:33:23.959359 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:24.006017 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.352657 1921402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346615684s)
	I1222 00:33:25.352716 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.546706 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.608812 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
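Rather than a full kubeadm init, the restart path replays individual init phases: certs, kubeconfigs, kubelet start, control-plane static-pod manifests, then local etcd. A condensed sketch of that sequence, with the binary and config paths copied from the commands above:

    K=/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$K" init phase certs all          --config "$CFG"
    sudo "$K" init phase kubeconfig all     --config "$CFG"
    sudo "$K" init phase kubelet-start      --config "$CFG"
    sudo "$K" init phase control-plane all  --config "$CFG"
    sudo "$K" init phase etcd local         --config "$CFG"
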
	I1222 00:33:25.656813 1921402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:33:25.656915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.157087 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.657071 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:27.157877 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:27.657904 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:28.157051 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:28.657224 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:29.157645 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:29.657086 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:30.157778 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:30.657101 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:31.157033 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:31.657140 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:32.157945 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:32.657971 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:33.157596 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:33.657091 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:34.157117 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:34.657826 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:35.157144 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:35.657188 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:36.157633 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:36.657483 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:37.157012 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:37.657072 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:38.157945 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:38.657088 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:39.157578 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:39.657163 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:40.157788 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:40.657163 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:41.157973 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:41.657614 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:42.157096 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:42.657134 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:43.157828 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:43.657830 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:44.157616 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:44.657109 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:45.157582 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:45.657114 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:46.157072 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:46.657017 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:47.157935 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:47.657589 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:48.157909 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:48.657385 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:49.157576 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:49.657671 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:50.157072 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:50.657099 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:51.157849 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:51.657233 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:52.157065 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:52.657643 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:53.157064 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:53.657808 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:54.157659 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:54.657443 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:55.157202 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:55.657314 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:56.157042 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:56.657861 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:57.157451 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:57.657599 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:58.157067 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:58.657586 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:59.157353 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:59.656976 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:00.181504 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:00.657937 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:01.157788 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:01.657874 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:02.157195 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:02.657832 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:03.157502 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:03.657928 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:04.158003 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:04.657704 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:05.157139 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:05.657745 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:06.157058 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:06.657102 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:07.157987 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:07.658025 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:08.157337 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:08.656989 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:09.157695 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:09.657652 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:10.157707 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:10.657722 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:11.157334 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:11.657276 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:12.157900 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:12.657113 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:13.157804 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:13.657609 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:14.157350 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:14.657597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:15.157180 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:15.658004 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:16.157770 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:16.657721 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:17.157789 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:17.657835 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:18.157788 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:18.657645 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:19.157603 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:19.657417 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:20.157127 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:20.657971 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:21.157737 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:21.657060 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:22.157985 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:22.657010 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:23.157051 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:23.657970 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:24.157033 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:24.657732 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:25.157680 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
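The hundred-odd pgrep probes above are minikube polling roughly every 500ms for a kube-apiserver process; after the 60-second budget (00:33:25 to 00:34:25) elapses with no match, it falls through to the log gathering below. A bash equivalent of that wait loop, with the pattern and interval taken from the log:

    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "kube-apiserver did not appear within 60s" >&2
            break
        fi
        sleep 0.5
    done
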
	I1222 00:34:25.657000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:25.657101 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:25.682430 1921402 cri.go:96] found id: ""
	I1222 00:34:25.682444 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.682451 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:25.682457 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:25.682521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:25.708391 1921402 cri.go:96] found id: ""
	I1222 00:34:25.708406 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.708414 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:25.708420 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:25.708495 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:25.735246 1921402 cri.go:96] found id: ""
	I1222 00:34:25.735261 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.735268 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:25.735273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:25.735330 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:25.764584 1921402 cri.go:96] found id: ""
	I1222 00:34:25.764599 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.764606 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:25.764611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:25.764672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:25.790407 1921402 cri.go:96] found id: ""
	I1222 00:34:25.790421 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.790428 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:25.790434 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:25.790492 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:25.816256 1921402 cri.go:96] found id: ""
	I1222 00:34:25.816270 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.816278 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:25.816283 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:25.816345 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:25.842813 1921402 cri.go:96] found id: ""
	I1222 00:34:25.842828 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.842835 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:25.842843 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:25.842853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:25.914526 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:25.914546 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:25.948008 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:25.948024 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:26.016574 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:26.016596 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:26.032648 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:26.032666 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:26.102628 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:26.094028   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.094916   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.096594   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.097214   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.098390   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:26.094028   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.094916   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.096594   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.097214   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.098390   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:28.604431 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:28.618535 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:28.618599 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:28.647904 1921402 cri.go:96] found id: ""
	I1222 00:34:28.647920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.647927 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:28.647933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:28.647987 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:28.674397 1921402 cri.go:96] found id: ""
	I1222 00:34:28.674411 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.674418 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:28.674423 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:28.674481 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:28.704093 1921402 cri.go:96] found id: ""
	I1222 00:34:28.704107 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.704114 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:28.704129 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:28.704187 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:28.729449 1921402 cri.go:96] found id: ""
	I1222 00:34:28.729463 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.729470 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:28.729475 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:28.729535 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:28.754277 1921402 cri.go:96] found id: ""
	I1222 00:34:28.754292 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.754300 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:28.754305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:28.754366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:28.779419 1921402 cri.go:96] found id: ""
	I1222 00:34:28.779445 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.779453 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:28.779459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:28.779526 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:28.804826 1921402 cri.go:96] found id: ""
	I1222 00:34:28.804841 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.804848 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:28.804855 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:28.804866 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:28.871702 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:28.871720 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:28.886327 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:28.886342 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:28.950915 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:28.942587   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.943003   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.944829   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.945306   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.946899   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
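Every kubectl failure in this run reduces to the same condition: nothing is listening on the apiserver port (8441 here, the port this test profile appears to configure instead of the default 8443), which matches the empty kube-apiserver container listing. A bare TCP dial reproduces the check without kubectl; the helper below is a hypothetical sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same condition kubectl keeps reporting above: nothing accepting
		// connections on the apiserver port.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}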
	I1222 00:34:28.950935 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:28.950945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:29.021787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:29.021807 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
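The timestamps show the whole probe-and-gather pass repeating roughly every two and a half to three seconds (00:34:28 → :31 → :34 → ...), which looks like a poll-until-deadline loop around the process check sudo pgrep -xnf kube-apiserver.*minikube.* (-f matches against the full command line, -x requires the pattern to match exactly, -n picks the newest match). A sketch of such a loop, with interval and timeout chosen for illustration only:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer repeats the same process check the log shows until
	// it succeeds or the deadline passes.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // a kube-apiserver process exists
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for kube-apiserver")
	}

	func main() {
		if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
			fmt.Println(err)
		}
	}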
	I1222 00:34:31.549965 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:31.559848 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:31.559909 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:31.586473 1921402 cri.go:96] found id: ""
	I1222 00:34:31.586487 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.586494 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:31.586500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:31.586557 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:31.619635 1921402 cri.go:96] found id: ""
	I1222 00:34:31.619649 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.619657 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:31.619662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:31.619718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:31.649990 1921402 cri.go:96] found id: ""
	I1222 00:34:31.650004 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.650011 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:31.650017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:31.650079 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:31.676339 1921402 cri.go:96] found id: ""
	I1222 00:34:31.676354 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.676361 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:31.676366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:31.676424 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:31.704084 1921402 cri.go:96] found id: ""
	I1222 00:34:31.704098 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.704105 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:31.704110 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:31.704170 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:31.728706 1921402 cri.go:96] found id: ""
	I1222 00:34:31.728720 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.728727 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:31.728732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:31.728790 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:31.756276 1921402 cri.go:96] found id: ""
	I1222 00:34:31.756290 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.756297 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:31.756304 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:31.756314 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:31.825066 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:31.825085 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.853826 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:31.853841 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:31.920106 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:31.920125 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:31.934967 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:31.934983 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:31.996347 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
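Each retry collects the same five sources: the kubelet journal, a severity-filtered dmesg, kubectl describe nodes, the CRI-O journal, and a container-status listing. A self-contained sketch that runs that set through bash -c and labels each result (the command strings are copied from the log; the runner itself is illustrative, not minikube's logs.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The five "Gathering logs for ..." sources each iteration collects.
		gather := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, g := range gather {
			out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
			fmt.Printf("== %s (err=%v) ==\n%s\n", g.name, err, out)
		}
	}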
	I1222 00:34:34.497426 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:34.507616 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:34.507676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:34.531742 1921402 cri.go:96] found id: ""
	I1222 00:34:34.531755 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.531763 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:34.531769 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:34.531843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:34.559670 1921402 cri.go:96] found id: ""
	I1222 00:34:34.559683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.559691 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:34.559695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:34.559762 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:34.587516 1921402 cri.go:96] found id: ""
	I1222 00:34:34.587531 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.587538 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:34.587543 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:34.587621 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:34.618669 1921402 cri.go:96] found id: ""
	I1222 00:34:34.618683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.618690 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:34.618695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:34.618754 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:34.655347 1921402 cri.go:96] found id: ""
	I1222 00:34:34.655361 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.655368 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:34.655373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:34.655429 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:34.680612 1921402 cri.go:96] found id: ""
	I1222 00:34:34.680626 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.680643 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:34.680648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:34.680718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:34.706049 1921402 cri.go:96] found id: ""
	I1222 00:34:34.706070 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.706078 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:34.706085 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:34.706095 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:34.774755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:34.774774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:34.789897 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:34.789917 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:34.854675 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:34.854686 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:34.854696 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:34.923449 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:34.923469 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.451135 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:37.461580 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:37.461643 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:37.488448 1921402 cri.go:96] found id: ""
	I1222 00:34:37.488474 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.488482 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:37.488488 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:37.488544 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:37.513499 1921402 cri.go:96] found id: ""
	I1222 00:34:37.513514 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.513526 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:37.513531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:37.513595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:37.539878 1921402 cri.go:96] found id: ""
	I1222 00:34:37.539892 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.539899 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:37.539905 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:37.539979 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:37.568838 1921402 cri.go:96] found id: ""
	I1222 00:34:37.568853 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.568860 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:37.568866 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:37.568934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:37.604110 1921402 cri.go:96] found id: ""
	I1222 00:34:37.604124 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.604131 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:37.604156 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:37.604212 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:37.639214 1921402 cri.go:96] found id: ""
	I1222 00:34:37.639228 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.639235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:37.639241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:37.639301 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:37.674749 1921402 cri.go:96] found id: ""
	I1222 00:34:37.674763 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.674771 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:37.674779 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:37.674790 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.709493 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:37.709513 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:37.781144 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:37.781164 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:37.797445 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:37.797461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:37.864668 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:37.864678 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:37.864688 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
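One detail worth noticing: the gather order changes on every retry (kubelet first at 00:34:28, CRI-O first at 00:34:31, container status first at 00:34:37). That shuffling is what ranging over a Go map looks like, since Go randomizes map iteration order on each range statement; this is an inference from the log pattern, not confirmed against minikube's source:

	package main

	import "fmt"

	func main() {
		steps := map[string]bool{
			"kubelet": true, "dmesg": true, "describe nodes": true,
			"CRI-O": true, "container status": true,
		}
		// Range order over a map is randomized per range statement, which
		// would produce exactly the reordering seen across retries above.
		for i := 0; i < 3; i++ {
			for name := range steps {
				fmt.Printf("%s | ", name)
			}
			fmt.Println()
		}
	}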
	I1222 00:34:40.444815 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:40.456172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:40.456239 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:40.482752 1921402 cri.go:96] found id: ""
	I1222 00:34:40.482774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.482781 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:40.482786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:40.482846 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:40.508000 1921402 cri.go:96] found id: ""
	I1222 00:34:40.508014 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.508021 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:40.508026 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:40.508087 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:40.533866 1921402 cri.go:96] found id: ""
	I1222 00:34:40.533881 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.533889 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:40.533894 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:40.533955 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:40.559781 1921402 cri.go:96] found id: ""
	I1222 00:34:40.559794 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.559802 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:40.559807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:40.559866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:40.586989 1921402 cri.go:96] found id: ""
	I1222 00:34:40.587003 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.587010 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:40.587016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:40.587074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:40.619369 1921402 cri.go:96] found id: ""
	I1222 00:34:40.619384 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.619400 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:40.619405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:40.619462 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:40.648846 1921402 cri.go:96] found id: ""
	I1222 00:34:40.648870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.648878 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:40.648886 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:40.648896 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:40.720161 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:40.720180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:40.736116 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:40.736141 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:40.808179 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:40.808193 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:40.808204 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.876486 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:40.876505 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.409595 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:43.419504 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:43.419564 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:43.444338 1921402 cri.go:96] found id: ""
	I1222 00:34:43.444360 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.444368 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:43.444373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:43.444430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:43.471938 1921402 cri.go:96] found id: ""
	I1222 00:34:43.471953 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.471960 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:43.471965 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:43.472025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:43.497576 1921402 cri.go:96] found id: ""
	I1222 00:34:43.497590 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.497597 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:43.497611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:43.497667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:43.526748 1921402 cri.go:96] found id: ""
	I1222 00:34:43.526762 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.526769 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:43.526775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:43.526838 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:43.556676 1921402 cri.go:96] found id: ""
	I1222 00:34:43.556691 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.556698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:43.556704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:43.556766 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:43.584425 1921402 cri.go:96] found id: ""
	I1222 00:34:43.584440 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.584447 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:43.584453 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:43.584514 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:43.619760 1921402 cri.go:96] found id: ""
	I1222 00:34:43.619774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.619782 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:43.619789 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:43.619813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:43.693963 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:43.693985 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.722757 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:43.722773 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:43.790316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:43.790336 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:43.804810 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:43.804829 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:43.866141 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:46.366416 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:46.376013 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:46.376073 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:46.402801 1921402 cri.go:96] found id: ""
	I1222 00:34:46.402815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.402822 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:46.402828 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:46.402884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:46.431904 1921402 cri.go:96] found id: ""
	I1222 00:34:46.431920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.431927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:46.431933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:46.431991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:46.456574 1921402 cri.go:96] found id: ""
	I1222 00:34:46.456588 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.456594 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:46.456600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:46.456656 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:46.481542 1921402 cri.go:96] found id: ""
	I1222 00:34:46.481556 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.481563 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:46.481569 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:46.481627 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:46.507668 1921402 cri.go:96] found id: ""
	I1222 00:34:46.507682 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.507689 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:46.507694 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:46.507756 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:46.531662 1921402 cri.go:96] found id: ""
	I1222 00:34:46.531675 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.531683 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:46.531688 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:46.531746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:46.558345 1921402 cri.go:96] found id: ""
	I1222 00:34:46.558359 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.558367 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:46.558375 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:46.558385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:46.630348 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:46.630367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:46.648136 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:46.648153 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:46.720659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:46.720669 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:46.720689 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:46.790237 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:46.790255 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
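Reading the dmesg invocation flag by flag (util-linux dmesg): -P writes straight to stdout instead of a pager, -H formats human-readable timestamps, -L=never suppresses color escapes so the captured log stays clean, and --level keeps only the listed severities; the trailing tail -n 400 caps the volume. The sketch below simply re-runs that pipeline from Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// -P: no pager; -H: human-readable timestamps; -L=never: no color
		// escapes in captured output; --level: only the listed severities.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
		if err != nil {
			fmt.Println("dmesg failed:", err)
		}
		fmt.Print(string(out))
	}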
	I1222 00:34:49.320987 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:49.331027 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:49.331083 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:49.355873 1921402 cri.go:96] found id: ""
	I1222 00:34:49.355887 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.355903 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:49.355909 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:49.355967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:49.381127 1921402 cri.go:96] found id: ""
	I1222 00:34:49.381151 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.381158 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:49.381163 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:49.381231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:49.407961 1921402 cri.go:96] found id: ""
	I1222 00:34:49.407983 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.407990 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:49.407995 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:49.408062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:49.437856 1921402 cri.go:96] found id: ""
	I1222 00:34:49.437870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.437877 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:49.437883 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:49.437940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:49.462901 1921402 cri.go:96] found id: ""
	I1222 00:34:49.462914 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.462921 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:49.462927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:49.462981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:49.487289 1921402 cri.go:96] found id: ""
	I1222 00:34:49.487303 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.487309 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:49.487314 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:49.487372 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:49.513186 1921402 cri.go:96] found id: ""
	I1222 00:34:49.513200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.513208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:49.513215 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:49.513225 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:49.527889 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:49.527904 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:49.591170 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:49.591180 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:49.591189 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:49.670799 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:49.670817 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.698837 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:49.698853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.268261 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:52.278246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:52.278304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:52.303014 1921402 cri.go:96] found id: ""
	I1222 00:34:52.303035 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.303043 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:52.303048 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:52.303114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:52.331907 1921402 cri.go:96] found id: ""
	I1222 00:34:52.331921 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.331928 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:52.331933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:52.331989 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:52.357227 1921402 cri.go:96] found id: ""
	I1222 00:34:52.357240 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.357247 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:52.357253 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:52.357315 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:52.383730 1921402 cri.go:96] found id: ""
	I1222 00:34:52.383745 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.383752 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:52.383758 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:52.383819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:52.409605 1921402 cri.go:96] found id: ""
	I1222 00:34:52.409620 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.409636 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:52.409641 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:52.409701 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:52.438817 1921402 cri.go:96] found id: ""
	I1222 00:34:52.438831 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.438838 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:52.438843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:52.438901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:52.464983 1921402 cri.go:96] found id: ""
	I1222 00:34:52.465007 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.465016 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:52.465027 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:52.465038 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:52.533460 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:52.533470 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:52.533493 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:52.602597 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:52.602617 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:52.636250 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:52.636266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.708981 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:52.709002 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
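Each probe cycle above runs the same sequence inside the node: a pgrep for a live apiserver process, one crictl listing per control-plane component, then five log sources (kubelet, dmesg, describe nodes, CRI-O, and container status). In the crictl listings, -a includes exited containers and --quiet prints only container IDs, so an empty result means no matching container exists in any state. The commands below are taken verbatim from the log and can be replayed by hand; the "minikube ssh" entry point is an assumption added here, not part of the captured run:

    minikube ssh
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                    # any running apiserver process?
    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver   # any apiserver container, in any state?
    sudo journalctl -u kubelet -n 400                               # kubelet log tail
    sudo journalctl -u crio -n 400                                  # CRI-O log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400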
	I1222 00:34:55.224727 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:55.235168 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:55.235225 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:55.260911 1921402 cri.go:96] found id: ""
	I1222 00:34:55.260926 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.260933 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:55.260938 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:55.260994 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:55.286615 1921402 cri.go:96] found id: ""
	I1222 00:34:55.286634 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.286651 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:55.286657 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:55.286736 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:55.311470 1921402 cri.go:96] found id: ""
	I1222 00:34:55.311484 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.311492 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:55.311497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:55.311555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:55.336659 1921402 cri.go:96] found id: ""
	I1222 00:34:55.336673 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.336689 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:55.336695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:55.336755 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:55.368789 1921402 cri.go:96] found id: ""
	I1222 00:34:55.368815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.368823 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:55.368829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:55.368895 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:55.394360 1921402 cri.go:96] found id: ""
	I1222 00:34:55.394374 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.394381 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:55.394387 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:55.394448 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:55.419182 1921402 cri.go:96] found id: ""
	I1222 00:34:55.419196 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.419204 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:55.419211 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:55.419224 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:55.486298 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:55.486316 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.501160 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:55.501179 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:55.569075 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
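Every "describe nodes" attempt fails identically: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and the dial is refused, i.e. nothing is listening on the apiserver port this test uses (the [::1] in the error is simply localhost resolving to the IPv6 loopback). A quick way to confirm from inside the node, as a sketch — these two commands are not part of the captured run:

    sudo ss -tlnp | grep 8441 || echo 'nothing listening on 8441'
    curl -sk https://localhost:8441/healthz    # connection refused while the apiserver is down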
	I1222 00:34:55.569085 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:55.569105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:55.643182 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:55.643207 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.182663 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:58.193476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:58.193537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:58.219859 1921402 cri.go:96] found id: ""
	I1222 00:34:58.219873 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.219880 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:58.219886 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:58.219944 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:58.245686 1921402 cri.go:96] found id: ""
	I1222 00:34:58.245700 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.245707 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:58.245712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:58.245769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:58.272173 1921402 cri.go:96] found id: ""
	I1222 00:34:58.272187 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.272194 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:58.272199 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:58.272258 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:58.296505 1921402 cri.go:96] found id: ""
	I1222 00:34:58.296519 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.296526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:58.296532 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:58.296590 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:58.321897 1921402 cri.go:96] found id: ""
	I1222 00:34:58.321911 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.321919 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:58.321924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:58.321981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:58.347185 1921402 cri.go:96] found id: ""
	I1222 00:34:58.347200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.347206 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:58.347212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:58.347272 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:58.371593 1921402 cri.go:96] found id: ""
	I1222 00:34:58.371607 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.371614 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:58.371621 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:58.371631 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:58.439841 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:58.439861 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.471817 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:58.471833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:58.540043 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:58.540062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:58.556739 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:58.556755 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:58.632107 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:01.133446 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:01.145134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:01.145201 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:01.174836 1921402 cri.go:96] found id: ""
	I1222 00:35:01.174853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.174861 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:01.174867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:01.174943 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:01.202691 1921402 cri.go:96] found id: ""
	I1222 00:35:01.202706 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.202715 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:01.202721 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:01.202785 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:01.232555 1921402 cri.go:96] found id: ""
	I1222 00:35:01.232578 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.232586 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:01.232592 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:01.232687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:01.260828 1921402 cri.go:96] found id: ""
	I1222 00:35:01.260853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.260861 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:01.260867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:01.260954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:01.290608 1921402 cri.go:96] found id: ""
	I1222 00:35:01.290623 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.290630 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:01.290635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:01.290721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:01.321704 1921402 cri.go:96] found id: ""
	I1222 00:35:01.321719 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.321726 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:01.321732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:01.321802 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:01.349512 1921402 cri.go:96] found id: ""
	I1222 00:35:01.349526 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.349533 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:01.349540 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:01.349551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:01.385559 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:01.385575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:01.457873 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:01.457900 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:01.473910 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:01.473928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:01.540419 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:01.540431 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:01.540442 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
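The probe timestamps (00:34:52, :55, :58, 00:35:01, :04, ...) show the harness re-polling roughly every three seconds; the order in which the five log sources are gathered rotates between cycles while the set stays fixed. The same wait pattern as an illustrative shell sketch, not minikube's actual implementation:

    # poll for an apiserver process every 3s, give up after ~2 minutes
    for i in $(seq 1 40); do
        sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo 'apiserver up'; break; }
        sleep 3
    done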
	I1222 00:35:04.114962 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:04.126202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:04.126262 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:04.157781 1921402 cri.go:96] found id: ""
	I1222 00:35:04.157795 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.157802 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:04.157807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:04.157868 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:04.185090 1921402 cri.go:96] found id: ""
	I1222 00:35:04.185104 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.185114 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:04.185119 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:04.185189 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:04.210432 1921402 cri.go:96] found id: ""
	I1222 00:35:04.210446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.210453 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:04.210458 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:04.210518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:04.235769 1921402 cri.go:96] found id: ""
	I1222 00:35:04.235783 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.235791 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:04.235797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:04.235853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:04.260844 1921402 cri.go:96] found id: ""
	I1222 00:35:04.260858 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.260867 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:04.260872 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:04.260940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:04.286088 1921402 cri.go:96] found id: ""
	I1222 00:35:04.286103 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.286110 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:04.286115 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:04.286171 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:04.311902 1921402 cri.go:96] found id: ""
	I1222 00:35:04.311916 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.311923 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:04.311930 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:04.311945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:04.383800 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:04.383820 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:04.398944 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:04.398960 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:04.463648 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:04.463658 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:04.463668 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:04.533070 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:04.533091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
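The "container status" step relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backticked substitution yields crictl's absolute path when which finds one (and otherwise just the bare name), and the outer || falls back to docker ps -a if the crictl invocation fails, so one line covers both CRI-O and Docker runtimes. The same idiom in isolation, with a hypothetical variable name:

    CRICTL=$(which crictl || echo crictl)      # absolute path if installed, else the bare name
    sudo "$CRICTL" ps -a || sudo docker ps -a  # fall back to docker when crictl is absent or fails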
	I1222 00:35:07.062949 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:07.072997 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:07.073058 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:07.104724 1921402 cri.go:96] found id: ""
	I1222 00:35:07.104739 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.104746 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:07.104751 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:07.104808 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:07.142683 1921402 cri.go:96] found id: ""
	I1222 00:35:07.142697 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.142704 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:07.142712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:07.142772 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:07.168569 1921402 cri.go:96] found id: ""
	I1222 00:35:07.168583 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.168590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:07.168596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:07.168654 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:07.196279 1921402 cri.go:96] found id: ""
	I1222 00:35:07.196295 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.196302 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:07.196307 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:07.196366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:07.220862 1921402 cri.go:96] found id: ""
	I1222 00:35:07.220876 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.220883 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:07.220888 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:07.220967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:07.247783 1921402 cri.go:96] found id: ""
	I1222 00:35:07.247797 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.247804 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:07.247810 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:07.247866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:07.272632 1921402 cri.go:96] found id: ""
	I1222 00:35:07.272658 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.272668 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:07.272675 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:07.272685 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:07.342006 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:07.342026 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:07.357444 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:07.357461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:07.428354 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:07.428365 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:07.428376 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:07.499031 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:07.499051 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.035662 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:10.047188 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:10.047256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:10.074669 1921402 cri.go:96] found id: ""
	I1222 00:35:10.074694 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.074702 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:10.074708 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:10.074775 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:10.111688 1921402 cri.go:96] found id: ""
	I1222 00:35:10.111702 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.111710 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:10.111716 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:10.111816 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:10.141600 1921402 cri.go:96] found id: ""
	I1222 00:35:10.141614 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.141622 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:10.141635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:10.141697 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:10.170620 1921402 cri.go:96] found id: ""
	I1222 00:35:10.170635 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.170642 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:10.170648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:10.170709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:10.196076 1921402 cri.go:96] found id: ""
	I1222 00:35:10.196090 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.196098 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:10.196111 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:10.196173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:10.222435 1921402 cri.go:96] found id: ""
	I1222 00:35:10.222449 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.222456 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:10.222461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:10.222518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:10.248319 1921402 cri.go:96] found id: ""
	I1222 00:35:10.248333 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.248350 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:10.248356 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:10.248367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:10.263245 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:10.263267 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:10.325942 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:10.325966 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:10.325977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:10.395942 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:10.395961 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.423671 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:10.423686 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
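For the journal tails, -u selects a single systemd unit and -n 400 keeps the last 400 lines; the dmesg call keeps only warnings and worse (--level warn,err,crit,alert,emerg), disables color (-L=never), and prints human-readable timestamps (-H), with -P presumably suppressing the pager that -H would otherwise invoke. Annotated form of the two gather commands (annotations added here, not in the captured run):

    sudo journalctl -u kubelet -n 400   # one unit, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -H human-readable, -P no pager (assumed), -L=never no color, --level severity filter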
	I1222 00:35:12.990380 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:13.000511 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:13.000585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:13.028285 1921402 cri.go:96] found id: ""
	I1222 00:35:13.028299 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.028306 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:13.028311 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:13.028380 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:13.055795 1921402 cri.go:96] found id: ""
	I1222 00:35:13.055809 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.055816 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:13.055821 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:13.055878 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:13.083562 1921402 cri.go:96] found id: ""
	I1222 00:35:13.083576 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.083584 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:13.083589 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:13.083647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:13.123340 1921402 cri.go:96] found id: ""
	I1222 00:35:13.123353 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.123360 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:13.123366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:13.123422 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:13.160293 1921402 cri.go:96] found id: ""
	I1222 00:35:13.160306 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.160314 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:13.160321 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:13.160393 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:13.186389 1921402 cri.go:96] found id: ""
	I1222 00:35:13.186403 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.186410 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:13.186415 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:13.186470 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:13.214187 1921402 cri.go:96] found id: ""
	I1222 00:35:13.214202 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.214208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:13.214216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:13.214228 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:13.241162 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:13.241178 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:13.306852 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:13.306871 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:13.322142 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:13.322158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:13.389384 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:13.389394 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:13.389419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:15.963833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:15.973870 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:15.973930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:15.999121 1921402 cri.go:96] found id: ""
	I1222 00:35:15.999135 1921402 logs.go:282] 0 containers: []
	W1222 00:35:15.999142 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:15.999147 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:15.999203 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:16.027303 1921402 cri.go:96] found id: ""
	I1222 00:35:16.027317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.027324 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:16.027330 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:16.027390 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:16.056726 1921402 cri.go:96] found id: ""
	I1222 00:35:16.056740 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.056747 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:16.056753 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:16.056809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:16.082920 1921402 cri.go:96] found id: ""
	I1222 00:35:16.082934 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.082941 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:16.082946 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:16.083011 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:16.120145 1921402 cri.go:96] found id: ""
	I1222 00:35:16.120160 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.120167 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:16.120172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:16.120228 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:16.149800 1921402 cri.go:96] found id: ""
	I1222 00:35:16.149814 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.149821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:16.149827 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:16.149883 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:16.175126 1921402 cri.go:96] found id: ""
	I1222 00:35:16.175140 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.175147 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:16.175155 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:16.175168 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:16.241780 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:16.241800 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:16.256196 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:16.256212 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:16.317803 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:16.317813 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:16.317824 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:16.386643 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:16.386662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
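Across every cycle in this stretch, all seven container listings (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) come back empty, so CRI-O is holding no control-plane containers in any state; together with the refused apiserver port, that is consistent with the kubelet never (re)creating the static control-plane pods during this window, though the log alone does not prove it. Two checks one could add on the node, as a sketch — the manifest path is the conventional kubeadm/minikube location, assumed here:

    sudo crictl pods                 # any pod sandboxes at all?
    ls /etc/kubernetes/manifests     # static pod manifests the kubelet should be launching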
	I1222 00:35:18.918855 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:18.928947 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:18.929009 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:18.954131 1921402 cri.go:96] found id: ""
	I1222 00:35:18.954151 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.954158 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:18.954169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:18.954229 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:18.978729 1921402 cri.go:96] found id: ""
	I1222 00:35:18.978780 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.978788 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:18.978793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:18.978859 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:19.006460 1921402 cri.go:96] found id: ""
	I1222 00:35:19.006474 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.006482 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:19.006487 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:19.006547 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:19.035802 1921402 cri.go:96] found id: ""
	I1222 00:35:19.035818 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.035825 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:19.035830 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:19.035889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:19.061703 1921402 cri.go:96] found id: ""
	I1222 00:35:19.061716 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.061723 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:19.061729 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:19.061789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:19.094243 1921402 cri.go:96] found id: ""
	I1222 00:35:19.094256 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.094264 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:19.094269 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:19.094324 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:19.135211 1921402 cri.go:96] found id: ""
	I1222 00:35:19.135238 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.135246 1921402 logs.go:284] No container was found matching "kindnet"
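The seven-step sequence just above is minikube's CRI probe: for each control-plane component it asks crictl for matching containers and, here, finds none. A minimal bash re-creation of that loop, assuming crictl is on PATH and its CRI socket is configured; the component names and flags are exactly the ones the log uses:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      # -a includes exited containers; --quiet prints only IDs.
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done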
	I1222 00:35:19.135253 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:19.135263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:19.203366 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:19.203376 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:19.203385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:19.275567 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:19.275585 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:19.303090 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:19.303105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:19.372108 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:19.372127 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
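With no containers found, the harness falls back to host-level evidence: kubelet and CRI-O journals, filtered kernel messages, and a raw container listing. The same collection, bundled into one script; every command below is taken verbatim from the Run: lines above, so nothing here is invented syntax:

    sudo journalctl -u kubelet -n 400          # last 400 kubelet lines
    sudo journalctl -u crio -n 400             # last 400 CRI-O lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a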
	I1222 00:35:21.888736 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:21.899401 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:21.899461 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:21.924030 1921402 cri.go:96] found id: ""
	I1222 00:35:21.924045 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.924052 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:21.924057 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:21.924114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:21.950593 1921402 cri.go:96] found id: ""
	I1222 00:35:21.950606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.950613 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:21.950618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:21.950676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:21.978354 1921402 cri.go:96] found id: ""
	I1222 00:35:21.978368 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.978375 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:21.978380 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:21.978437 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:22.005709 1921402 cri.go:96] found id: ""
	I1222 00:35:22.005724 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.005734 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:22.005741 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:22.005807 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:22.033771 1921402 cri.go:96] found id: ""
	I1222 00:35:22.033785 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.033793 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:22.033799 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:22.033862 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:22.063584 1921402 cri.go:96] found id: ""
	I1222 00:35:22.063598 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.063606 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:22.063611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:22.063672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:22.090629 1921402 cri.go:96] found id: ""
	I1222 00:35:22.090643 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.090651 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:22.090658 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:22.090669 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:22.172686 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:22.172713 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:22.192440 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:22.192459 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:22.261299 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:22.261309 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:22.261339 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:22.337282 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:22.337301 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:24.868746 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:24.878846 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:24.878904 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:24.903813 1921402 cri.go:96] found id: ""
	I1222 00:35:24.903828 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.903835 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:24.903841 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:24.903901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:24.930142 1921402 cri.go:96] found id: ""
	I1222 00:35:24.930157 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.930164 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:24.930169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:24.930224 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:24.954989 1921402 cri.go:96] found id: ""
	I1222 00:35:24.955009 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.955016 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:24.955022 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:24.955082 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:24.984399 1921402 cri.go:96] found id: ""
	I1222 00:35:24.984413 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.984421 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:24.984426 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:24.984489 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:25.016621 1921402 cri.go:96] found id: ""
	I1222 00:35:25.016637 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.016644 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:25.016650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:25.016714 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:25.044807 1921402 cri.go:96] found id: ""
	I1222 00:35:25.044821 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.044830 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:25.044835 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:25.044926 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:25.071499 1921402 cri.go:96] found id: ""
	I1222 00:35:25.071514 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.071521 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:25.071530 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:25.071541 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:25.149156 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:25.149180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:25.168790 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:25.168813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:25.235601 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:25.235611 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:25.235622 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:25.306834 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:25.306854 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
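Note the cadence: the whole probe-and-gather cycle repeats roughly every three seconds with identical results. A hedged sketch of an equivalent bounded wait, polling until the apiserver answers or a deadline passes; the port and the ~3 s interval come from the log, while the 300 s budget is an illustrative choice, not the harness's actual timeout:

    deadline=$((SECONDS + 300))
    until curl -ksf https://localhost:8441/healthz >/dev/null 2>&1; do
      if (( SECONDS >= deadline )); then
        echo "apiserver did not come up within 300s" >&2
        exit 1
      fi
      sleep 3
    done
    echo "apiserver is healthy"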
	I1222 00:35:27.843405 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:27.854516 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:27.854584 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:27.881946 1921402 cri.go:96] found id: ""
	I1222 00:35:27.881960 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.881967 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:27.881972 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:27.882034 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:27.911889 1921402 cri.go:96] found id: ""
	I1222 00:35:27.911903 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.911922 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:27.911927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:27.911984 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:27.938642 1921402 cri.go:96] found id: ""
	I1222 00:35:27.938656 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.938687 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:27.938693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:27.938770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:27.965296 1921402 cri.go:96] found id: ""
	I1222 00:35:27.965317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.965336 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:27.965342 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:27.965410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:27.990233 1921402 cri.go:96] found id: ""
	I1222 00:35:27.990247 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.990254 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:27.990260 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:27.990316 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:28.017956 1921402 cri.go:96] found id: ""
	I1222 00:35:28.017970 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.017986 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:28.017993 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:28.018051 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:28.046152 1921402 cri.go:96] found id: ""
	I1222 00:35:28.046174 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.046182 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:28.046189 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:28.046200 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:28.114636 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:28.114656 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:28.131317 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:28.131333 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:28.201574 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:28.201584 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:28.201594 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:28.273807 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:28.273827 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:30.802833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:30.812760 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:30.812828 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:30.837592 1921402 cri.go:96] found id: ""
	I1222 00:35:30.837606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.837613 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:30.837618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:30.837678 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:30.866275 1921402 cri.go:96] found id: ""
	I1222 00:35:30.866289 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.866296 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:30.866301 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:30.866360 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:30.891264 1921402 cri.go:96] found id: ""
	I1222 00:35:30.891279 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.891286 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:30.891292 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:30.891350 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:30.915256 1921402 cri.go:96] found id: ""
	I1222 00:35:30.915270 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.915277 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:30.915288 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:30.915344 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:30.939875 1921402 cri.go:96] found id: ""
	I1222 00:35:30.939888 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.939895 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:30.939901 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:30.939959 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:30.964213 1921402 cri.go:96] found id: ""
	I1222 00:35:30.964227 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.964235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:30.964240 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:30.964298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:30.989063 1921402 cri.go:96] found id: ""
	I1222 00:35:30.989076 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.989084 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:30.989091 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:30.989104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:31.055959 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:31.055969 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:31.055980 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:31.130704 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:31.130724 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:31.175086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:31.175102 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:31.241755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:31.241774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
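The dmesg invocation used throughout these cycles packs several short flags together. Spelled out with their long forms (per util-linux dmesg; the glosses are mine, the flag set is the log's):

    #   -H  --human        readable timestamps
    #   -P  --nopager      do not pipe through a pager
    #   -L=never           --color=never, plain output for log capture
    #   --level ...        only warnings and worse
    sudo dmesg --human --nopager --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400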
	I1222 00:35:33.758213 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:33.768479 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:33.768543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:33.800290 1921402 cri.go:96] found id: ""
	I1222 00:35:33.800304 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.800311 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:33.800317 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:33.800374 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:33.828235 1921402 cri.go:96] found id: ""
	I1222 00:35:33.828249 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.828256 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:33.828261 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:33.828319 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:33.853431 1921402 cri.go:96] found id: ""
	I1222 00:35:33.853446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.853454 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:33.853459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:33.853517 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:33.878506 1921402 cri.go:96] found id: ""
	I1222 00:35:33.878520 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.878526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:33.878531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:33.878589 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:33.903278 1921402 cri.go:96] found id: ""
	I1222 00:35:33.903292 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.903299 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:33.903305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:33.903361 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:33.931624 1921402 cri.go:96] found id: ""
	I1222 00:35:33.931638 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.931645 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:33.931650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:33.931709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:33.960994 1921402 cri.go:96] found id: ""
	I1222 00:35:33.961013 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.961029 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:33.961042 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:33.961053 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:34.028536 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:34.028559 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:34.043699 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:34.043717 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:34.113813 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:34.113822 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:34.113832 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:34.196400 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:34.196421 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:36.727299 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:36.737834 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:36.737892 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:36.762640 1921402 cri.go:96] found id: ""
	I1222 00:35:36.762654 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.762661 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:36.762666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:36.762722 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:36.789016 1921402 cri.go:96] found id: ""
	I1222 00:35:36.789030 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.789036 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:36.789041 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:36.789097 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:36.814574 1921402 cri.go:96] found id: ""
	I1222 00:35:36.814588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.814595 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:36.814600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:36.814661 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:36.839346 1921402 cri.go:96] found id: ""
	I1222 00:35:36.839361 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.839368 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:36.839374 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:36.839430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:36.868832 1921402 cri.go:96] found id: ""
	I1222 00:35:36.868846 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.868853 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:36.868858 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:36.868930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:36.894581 1921402 cri.go:96] found id: ""
	I1222 00:35:36.894595 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.894602 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:36.894607 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:36.894664 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:36.924019 1921402 cri.go:96] found id: ""
	I1222 00:35:36.924033 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.924040 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:36.924047 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:36.924059 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:36.990902 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:36.990921 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:37.008849 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:37.008867 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:37.083336 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:37.083345 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:37.083356 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:37.173618 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:37.173638 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
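The container-status command above relies on a small fallback idiom worth unpacking: `which crictl || echo crictl` keeps the command substitution non-empty when `which` finds nothing, so sudo still has a word to exec (and fails loudly there instead of with an empty command), and `|| sudo docker ps -a` tries Docker only if the CRI listing fails. The same idiom as a reusable function, a sketch rather than harness code:

    runtime_ps() {
      # Prefer crictl; fall back to the literal name, then to docker.
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }
    runtime_ps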
	I1222 00:35:39.706094 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:39.716405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:39.716478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:39.746846 1921402 cri.go:96] found id: ""
	I1222 00:35:39.746861 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.746868 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:39.746873 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:39.746933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:39.775788 1921402 cri.go:96] found id: ""
	I1222 00:35:39.775802 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.775810 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:39.775815 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:39.775872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:39.800503 1921402 cri.go:96] found id: ""
	I1222 00:35:39.800523 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.800531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:39.800536 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:39.800595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:39.829015 1921402 cri.go:96] found id: ""
	I1222 00:35:39.829028 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.829035 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:39.829040 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:39.829096 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:39.856667 1921402 cri.go:96] found id: ""
	I1222 00:35:39.856680 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.856688 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:39.856693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:39.856752 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:39.881236 1921402 cri.go:96] found id: ""
	I1222 00:35:39.881259 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.881266 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:39.881272 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:39.881365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:39.907774 1921402 cri.go:96] found id: ""
	I1222 00:35:39.907788 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.907795 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:39.907803 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:39.907814 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:39.973961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:39.973979 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:39.988619 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:39.988633 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:40.060659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:40.060671 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:40.060682 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:40.133570 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:40.133590 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:42.668577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:42.678942 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:42.678998 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:42.708217 1921402 cri.go:96] found id: ""
	I1222 00:35:42.708231 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.708237 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:42.708243 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:42.708298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:42.733470 1921402 cri.go:96] found id: ""
	I1222 00:35:42.733484 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.733491 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:42.733497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:42.733555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:42.758025 1921402 cri.go:96] found id: ""
	I1222 00:35:42.758039 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.758050 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:42.758056 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:42.758114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:42.783070 1921402 cri.go:96] found id: ""
	I1222 00:35:42.783084 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.783092 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:42.783097 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:42.783156 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:42.813432 1921402 cri.go:96] found id: ""
	I1222 00:35:42.813447 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.813454 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:42.813459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:42.813522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:42.841107 1921402 cri.go:96] found id: ""
	I1222 00:35:42.841121 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.841137 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:42.841143 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:42.841208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:42.866521 1921402 cri.go:96] found id: ""
	I1222 00:35:42.866535 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.866543 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:42.866551 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:42.866562 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:42.933074 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:42.933092 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:42.950000 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:42.950025 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:43.021528 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:43.021539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:43.021551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:43.096285 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:43.096308 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
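	Each `found id: ""` / `0 containers` pair above comes from the same crictl query run once per component. A short Go sketch of that per-component enumeration, using the exact command from the log; the helper name and the printed wording are assumptions for illustration:

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the query the log repeats for each component:
	// `crictl --timeout=10s ps -a --quiet --name=<name>` prints one container
	// ID per line, or nothing at all when (as in this log) the component
	// never started. Hypothetical helper, not minikube's cri package.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c) // matches the W-level log lines
				continue
			}
			fmt.Println(c, ids)
		}
	}
	```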
	I1222 00:35:45.625156 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:45.635680 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:45.635759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:45.665749 1921402 cri.go:96] found id: ""
	I1222 00:35:45.665763 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.665770 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:45.665775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:45.665833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:45.692343 1921402 cri.go:96] found id: ""
	I1222 00:35:45.692357 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.692364 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:45.692370 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:45.692432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:45.721093 1921402 cri.go:96] found id: ""
	I1222 00:35:45.721108 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.721115 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:45.721120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:45.721180 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:45.747092 1921402 cri.go:96] found id: ""
	I1222 00:35:45.747107 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.747114 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:45.747120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:45.747177 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:45.776676 1921402 cri.go:96] found id: ""
	I1222 00:35:45.776691 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.776698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:45.776704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:45.776760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:45.802786 1921402 cri.go:96] found id: ""
	I1222 00:35:45.802800 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.802808 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:45.802814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:45.802873 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:45.828248 1921402 cri.go:96] found id: ""
	I1222 00:35:45.828269 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.828279 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:45.828287 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:45.828299 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:45.844960 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:45.844976 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:45.915742 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:45.915756 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:45.915766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:45.994270 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:45.994295 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:46.025259 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:46.025275 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:48.596430 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:48.606663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:48.606725 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:48.635341 1921402 cri.go:96] found id: ""
	I1222 00:35:48.635356 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.635363 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:48.635369 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:48.635430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:48.661161 1921402 cri.go:96] found id: ""
	I1222 00:35:48.661184 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.661191 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:48.661196 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:48.661259 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:48.690450 1921402 cri.go:96] found id: ""
	I1222 00:35:48.690464 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.690470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:48.690476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:48.690537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:48.716347 1921402 cri.go:96] found id: ""
	I1222 00:35:48.716363 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.716388 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:48.716409 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:48.716477 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:48.746573 1921402 cri.go:96] found id: ""
	I1222 00:35:48.746588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.746595 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:48.746600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:48.746659 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:48.772755 1921402 cri.go:96] found id: ""
	I1222 00:35:48.772770 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.772777 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:48.772783 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:48.772843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:48.799586 1921402 cri.go:96] found id: ""
	I1222 00:35:48.799600 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.799607 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:48.799615 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:48.799629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:48.860067 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:48.852213   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.852883   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.853939   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.854510   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.856073   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:48.852213   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.852883   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.853939   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.854510   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.856073   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:48.860076 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:48.860087 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:48.930351 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:48.930371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:48.960079 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:48.960099 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:49.028235 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:49.028254 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.545623 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:51.556102 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:51.556160 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:51.584081 1921402 cri.go:96] found id: ""
	I1222 00:35:51.584095 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.584102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:51.584108 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:51.584166 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:51.608602 1921402 cri.go:96] found id: ""
	I1222 00:35:51.608616 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.608623 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:51.608628 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:51.608684 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:51.638215 1921402 cri.go:96] found id: ""
	I1222 00:35:51.638229 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.638236 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:51.638242 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:51.638300 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:51.673979 1921402 cri.go:96] found id: ""
	I1222 00:35:51.673993 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.674000 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:51.674005 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:51.674062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:51.703793 1921402 cri.go:96] found id: ""
	I1222 00:35:51.703807 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.703814 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:51.703820 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:51.703879 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:51.731106 1921402 cri.go:96] found id: ""
	I1222 00:35:51.731120 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.731128 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:51.731133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:51.731191 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:51.758738 1921402 cri.go:96] found id: ""
	I1222 00:35:51.758752 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.758759 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:51.758766 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:51.758776 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:51.788397 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:51.788413 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:51.855218 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:51.855236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.870464 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:51.870480 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:51.935854 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:51.927185   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.927877   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.929600   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.930120   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.931805   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:51.927185   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.927877   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.929600   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.930120   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.931805   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:51.935864 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:51.935874 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
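	The repeated "connection refused" stderr in every describe-nodes attempt means nothing is listening on the apiserver port at all, so kubectl fails before any API call is made. A quick dial reproduces the same failure mode without invoking kubectl; port 8441 is the one shown in this log, and this probe is a diagnostic sketch rather than anything the test harness runs:

	```go
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt a plain TCP connection to the apiserver port from the log.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // e.g. "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8441")
	}
	```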
	I1222 00:35:54.506346 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:54.516494 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:54.516555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:54.542861 1921402 cri.go:96] found id: ""
	I1222 00:35:54.542877 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.542884 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:54.542889 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:54.542947 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:54.567393 1921402 cri.go:96] found id: ""
	I1222 00:35:54.567409 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.567417 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:54.567422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:54.567480 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:54.596220 1921402 cri.go:96] found id: ""
	I1222 00:35:54.596234 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.596241 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:54.596246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:54.596307 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:54.621195 1921402 cri.go:96] found id: ""
	I1222 00:35:54.621210 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.621217 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:54.621223 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:54.621286 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:54.647655 1921402 cri.go:96] found id: ""
	I1222 00:35:54.647669 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.647676 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:54.647681 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:54.647746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:54.673352 1921402 cri.go:96] found id: ""
	I1222 00:35:54.673365 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.673372 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:54.673378 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:54.673434 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:54.702345 1921402 cri.go:96] found id: ""
	I1222 00:35:54.702359 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.702366 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:54.702373 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:54.702385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:54.729725 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:54.729740 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:54.795556 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:54.795573 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:54.810399 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:54.810415 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:54.873205 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:54.864296   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.865005   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.866876   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.867451   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.869170   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:54.864296   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.865005   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.866876   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.867451   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.869170   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:54.873215 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:54.873236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.442564 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:57.452367 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:57.452430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:57.479752 1921402 cri.go:96] found id: ""
	I1222 00:35:57.479765 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.479773 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:57.479780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:57.479843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:57.505262 1921402 cri.go:96] found id: ""
	I1222 00:35:57.505277 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.505286 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:57.505291 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:57.505367 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:57.530750 1921402 cri.go:96] found id: ""
	I1222 00:35:57.530764 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.530771 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:57.530777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:57.530833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:57.557273 1921402 cri.go:96] found id: ""
	I1222 00:35:57.557286 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.557293 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:57.557298 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:57.557375 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:57.589790 1921402 cri.go:96] found id: ""
	I1222 00:35:57.589803 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.589810 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:57.589816 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:57.589872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:57.615560 1921402 cri.go:96] found id: ""
	I1222 00:35:57.615573 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.615580 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:57.615585 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:57.615647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:57.643950 1921402 cri.go:96] found id: ""
	I1222 00:35:57.643964 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.643971 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:57.643979 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:57.643989 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:57.710337 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:57.710357 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:57.725273 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:57.725289 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:57.792176 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:57.782446   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.783810   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.784701   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.785725   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.786474   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:57.782446   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.783810   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.784701   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.785725   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.786474   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:57.792186 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:57.792202 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.869558 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:57.869578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
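	The "container status" step above is a bash fallback chain: use crictl if it resolves on PATH, otherwise fall back to `docker ps -a`. A sketch of issuing that same compound command, assuming a local shell where the log instead goes through an SSH session:

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Compound command copied from the log: prefer crictl, fall back to docker.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
		}
		fmt.Print(string(out))
	}
	```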
	I1222 00:36:00.405062 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:00.432605 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:00.432680 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:00.470863 1921402 cri.go:96] found id: ""
	I1222 00:36:00.470879 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.470886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:00.470892 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:00.470958 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:00.501706 1921402 cri.go:96] found id: ""
	I1222 00:36:00.501723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.501736 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:00.501743 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:00.501825 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:00.531718 1921402 cri.go:96] found id: ""
	I1222 00:36:00.531733 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.531741 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:00.531746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:00.531814 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:00.560495 1921402 cri.go:96] found id: ""
	I1222 00:36:00.560511 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.560518 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:00.560525 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:00.560585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:00.588202 1921402 cri.go:96] found id: ""
	I1222 00:36:00.588217 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.588225 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:00.588230 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:00.588293 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:00.616313 1921402 cri.go:96] found id: ""
	I1222 00:36:00.616328 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.616335 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:00.616341 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:00.616402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:00.647784 1921402 cri.go:96] found id: ""
	I1222 00:36:00.647799 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.647806 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:00.647814 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:00.647823 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:00.715096 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:00.715115 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:00.730829 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:00.730845 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:00.797617 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:00.797630 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:00.797642 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:00.866765 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:00.866784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.396597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:03.407146 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:03.407208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:03.433039 1921402 cri.go:96] found id: ""
	I1222 00:36:03.433053 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.433061 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:03.433066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:03.433141 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:03.458706 1921402 cri.go:96] found id: ""
	I1222 00:36:03.458720 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.458727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:03.458732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:03.458789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:03.484510 1921402 cri.go:96] found id: ""
	I1222 00:36:03.484524 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.484531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:03.484537 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:03.484607 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:03.510384 1921402 cri.go:96] found id: ""
	I1222 00:36:03.510399 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.510407 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:03.510413 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:03.510475 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:03.543748 1921402 cri.go:96] found id: ""
	I1222 00:36:03.543763 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.543770 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:03.543775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:03.543833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:03.571136 1921402 cri.go:96] found id: ""
	I1222 00:36:03.571150 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.571157 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:03.571162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:03.571219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:03.600085 1921402 cri.go:96] found id: ""
	I1222 00:36:03.600100 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.600107 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:03.600113 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:03.600124 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:03.664241 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:03.664252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:03.664264 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:03.738233 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:03.738260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.773144 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:03.773160 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:03.841880 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:03.841901 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
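	Each retry fans out over the same five log sources (kubelet, dmesg, describe nodes, CRI-O, container status), in varying order. A compact sketch of that gather loop; the command strings are copied from the log, while the map-driven structure and the discard of output are assumptions about how one might write such a collector:

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			fmt.Println("Gathering logs for", name, "...")
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				// describe nodes keeps failing here while the apiserver is down
				fmt.Printf("gathering %s failed: %v\n", name, err)
			}
			_ = out // a real collector would persist this instead of discarding it
		}
	}
	```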
	I1222 00:36:06.357571 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:06.368813 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:06.368874 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:06.400278 1921402 cri.go:96] found id: ""
	I1222 00:36:06.400293 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.400301 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:06.400312 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:06.400373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:06.430787 1921402 cri.go:96] found id: ""
	I1222 00:36:06.430801 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.430809 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:06.430814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:06.430872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:06.455395 1921402 cri.go:96] found id: ""
	I1222 00:36:06.455409 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.455416 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:06.455421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:06.455478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:06.485881 1921402 cri.go:96] found id: ""
	I1222 00:36:06.485895 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.485902 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:06.485908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:06.485981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:06.512824 1921402 cri.go:96] found id: ""
	I1222 00:36:06.512838 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.512845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:06.512850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:06.512918 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:06.540462 1921402 cri.go:96] found id: ""
	I1222 00:36:06.540477 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.540494 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:06.540500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:06.540572 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:06.566709 1921402 cri.go:96] found id: ""
	I1222 00:36:06.566723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.566731 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:06.566738 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:06.566750 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:06.634309 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:06.634331 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.649342 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:06.649359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:06.719108 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:06.709644   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.711082   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.712027   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.713691   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.714019   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:06.719119 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:06.719129 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:06.791062 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:06.791080 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
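Note: the cycle above is minikube's health-check and log-gathering loop. It probes for a running kube-apiserver process with pgrep, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, falls back to dumping the kubelet journal, dmesg, the CRI-O journal, and overall container status. A minimal sketch of the same probe, assuming shell access to the node through minikube ssh (the loop itself and its output format are illustrative, not taken from this run):

    #!/usr/bin/env bash
    # Sketch: repeat minikube's control-plane container probe by hand.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(minikube ssh -- sudo crictl --timeout=10s ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching $name"
      else
        echo "$name: $ids"
      fi
    done

An empty result for every component, as seen throughout this log, points at the control plane never having come up rather than a single component crashing.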
	I1222 00:36:09.321217 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:09.331643 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:09.331702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:09.370112 1921402 cri.go:96] found id: ""
	I1222 00:36:09.370139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.370146 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:09.370152 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:09.370216 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:09.400020 1921402 cri.go:96] found id: ""
	I1222 00:36:09.400041 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.400049 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:09.400055 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:09.400126 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:09.436087 1921402 cri.go:96] found id: ""
	I1222 00:36:09.436114 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.436121 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:09.436126 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:09.436194 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:09.463697 1921402 cri.go:96] found id: ""
	I1222 00:36:09.463711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.463718 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:09.463724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:09.463780 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:09.491471 1921402 cri.go:96] found id: ""
	I1222 00:36:09.491485 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.491492 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:09.491498 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:09.491555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:09.516866 1921402 cri.go:96] found id: ""
	I1222 00:36:09.516880 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.516903 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:09.516908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:09.516967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:09.545747 1921402 cri.go:96] found id: ""
	I1222 00:36:09.545761 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.545768 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:09.545777 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:09.545788 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:09.607682 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:09.599126   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.599834   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.601473   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.602038   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.603633   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:09.607692 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:09.607702 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:09.682101 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:09.682120 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.713866 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:09.713881 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:09.781658 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:09.781677 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
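Note: every "describe nodes" attempt fails before any API request is made; kubectl cannot open a TCP connection to localhost:8441 at all, which is consistent with the empty crictl listings above (no kube-apiserver container exists to listen there; 8441 rather than the default 8443 suggests a profile started with a custom apiserver port). A quick manual confirmation of the same condition, assuming shell access to the node (a sketch, not commands from this run):

    # Is anything listening on the apiserver port this profile uses (8441 here)?
    minikube ssh -- sudo ss -tlnp | grep 8441

    # Is a kube-apiserver process alive at all? (same pattern the log uses)
    minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # If the listener were up, even an unauthenticated probe would get an HTTP
    # answer (possibly 401/403) instead of "connection refused":
    minikube ssh -- curl -sk https://localhost:8441/healthz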
	I1222 00:36:12.297278 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:12.307636 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:12.307707 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:12.337251 1921402 cri.go:96] found id: ""
	I1222 00:36:12.337266 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.337273 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:12.337278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:12.337373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:12.371706 1921402 cri.go:96] found id: ""
	I1222 00:36:12.371719 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.371727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:12.371732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:12.371789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:12.411047 1921402 cri.go:96] found id: ""
	I1222 00:36:12.411060 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.411067 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:12.411072 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:12.411132 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:12.435536 1921402 cri.go:96] found id: ""
	I1222 00:36:12.435550 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.435557 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:12.435562 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:12.435623 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:12.461780 1921402 cri.go:96] found id: ""
	I1222 00:36:12.461794 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.461802 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:12.461808 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:12.461872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:12.487870 1921402 cri.go:96] found id: ""
	I1222 00:36:12.487884 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.487891 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:12.487897 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:12.487954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:12.513933 1921402 cri.go:96] found id: ""
	I1222 00:36:12.513947 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.513954 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:12.513961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:12.513972 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.528919 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:12.528934 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:12.596732 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:12.587250   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.587883   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.589568   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.590112   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.592244   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:12.596742 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:12.596753 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:12.669434 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:12.669452 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:12.702088 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:12.702104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:15.269222 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:15.279437 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:15.279497 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:15.308005 1921402 cri.go:96] found id: ""
	I1222 00:36:15.308019 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.308026 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:15.308034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:15.308104 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:15.335800 1921402 cri.go:96] found id: ""
	I1222 00:36:15.335814 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.335822 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:15.335829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:15.335889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:15.364041 1921402 cri.go:96] found id: ""
	I1222 00:36:15.364055 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.364064 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:15.364070 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:15.364127 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:15.396634 1921402 cri.go:96] found id: ""
	I1222 00:36:15.396649 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.396657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:15.396663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:15.396721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:15.427551 1921402 cri.go:96] found id: ""
	I1222 00:36:15.427565 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.427572 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:15.427578 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:15.427640 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:15.462972 1921402 cri.go:96] found id: ""
	I1222 00:36:15.462987 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.462994 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:15.463000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:15.463057 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:15.488170 1921402 cri.go:96] found id: ""
	I1222 00:36:15.488184 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.488201 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:15.488208 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:15.488219 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:15.503474 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:15.503499 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:15.576848 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:15.568696   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.569154   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.570855   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.571328   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.572897   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:15.576859 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:15.576870 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:15.647239 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:15.647258 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:15.677589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:15.677604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.247711 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:18.259251 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:18.259314 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:18.285041 1921402 cri.go:96] found id: ""
	I1222 00:36:18.285056 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.285063 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:18.285068 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:18.285129 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:18.312429 1921402 cri.go:96] found id: ""
	I1222 00:36:18.312442 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.312449 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:18.312455 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:18.312509 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:18.342758 1921402 cri.go:96] found id: ""
	I1222 00:36:18.342772 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.342779 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:18.342786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:18.342844 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:18.376484 1921402 cri.go:96] found id: ""
	I1222 00:36:18.376497 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.376505 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:18.376510 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:18.376567 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:18.411956 1921402 cri.go:96] found id: ""
	I1222 00:36:18.411970 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.411987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:18.411992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:18.412059 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:18.441521 1921402 cri.go:96] found id: ""
	I1222 00:36:18.441535 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.441542 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:18.441549 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:18.441606 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:18.466320 1921402 cri.go:96] found id: ""
	I1222 00:36:18.466335 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.466342 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:18.466349 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:18.466359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.540350 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:18.540370 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:18.555809 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:18.555826 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:18.620233 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:18.612078   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.612476   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.613960   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.615241   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.616445   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:18.620247 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:18.620260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:18.690556 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:18.690578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.222964 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:21.233019 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:21.233077 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:21.258106 1921402 cri.go:96] found id: ""
	I1222 00:36:21.258120 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.258127 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:21.258133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:21.258190 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:21.282679 1921402 cri.go:96] found id: ""
	I1222 00:36:21.282693 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.282699 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:21.282704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:21.282761 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:21.311750 1921402 cri.go:96] found id: ""
	I1222 00:36:21.311764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.311789 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:21.311794 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:21.311861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:21.340844 1921402 cri.go:96] found id: ""
	I1222 00:36:21.340858 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.340866 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:21.340871 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:21.340940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:21.375873 1921402 cri.go:96] found id: ""
	I1222 00:36:21.375898 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.375905 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:21.375910 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:21.375973 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:21.406752 1921402 cri.go:96] found id: ""
	I1222 00:36:21.406767 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.406775 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:21.406780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:21.406845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:21.430878 1921402 cri.go:96] found id: ""
	I1222 00:36:21.430892 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.430899 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:21.430906 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:21.430925 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:21.499738 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:21.499757 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.532589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:21.532604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:21.600165 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:21.600183 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:21.615391 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:21.615407 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:21.684109 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:21.675788   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.676744   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678413   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678724   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.680260   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
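Note: the probe repeats on a roughly three-second cadence (00:36:21, 00:36:24, 00:36:27, ...), the shape of a poll-until-healthy loop running against a longer overall deadline. An equivalent wait loop, written as a sketch with an assumed five-minute budget (the deadline value is illustrative):

    # Sketch: poll every 3s until an apiserver process appears, or give up.
    deadline=$((SECONDS + 300))
    until minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is running"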
	I1222 00:36:24.185197 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:24.196664 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:24.196744 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:24.223518 1921402 cri.go:96] found id: ""
	I1222 00:36:24.223533 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.223540 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:24.223546 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:24.223609 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:24.250130 1921402 cri.go:96] found id: ""
	I1222 00:36:24.250145 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.250152 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:24.250159 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:24.250218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:24.275065 1921402 cri.go:96] found id: ""
	I1222 00:36:24.275079 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.275086 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:24.275091 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:24.275147 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:24.299603 1921402 cri.go:96] found id: ""
	I1222 00:36:24.299617 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.299625 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:24.299630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:24.299688 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:24.325078 1921402 cri.go:96] found id: ""
	I1222 00:36:24.325092 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.325099 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:24.325116 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:24.325173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:24.357840 1921402 cri.go:96] found id: ""
	I1222 00:36:24.357854 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.357871 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:24.357877 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:24.357933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:24.384301 1921402 cri.go:96] found id: ""
	I1222 00:36:24.384315 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.384345 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:24.384356 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:24.384371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:24.455452 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:24.455475 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:24.470577 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:24.470593 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:24.536053 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:24.526996   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.527804   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.529667   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.530226   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.531796   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.536310 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:24.536325 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:24.615547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:24.615572 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.151479 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:27.161793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:27.161853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:27.187096 1921402 cri.go:96] found id: ""
	I1222 00:36:27.187110 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.187117 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:27.187123 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:27.187179 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:27.212527 1921402 cri.go:96] found id: ""
	I1222 00:36:27.212541 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.212548 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:27.212553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:27.212613 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:27.241441 1921402 cri.go:96] found id: ""
	I1222 00:36:27.241455 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.241462 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:27.241467 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:27.241522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:27.266764 1921402 cri.go:96] found id: ""
	I1222 00:36:27.266778 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.266786 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:27.266791 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:27.266848 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:27.292091 1921402 cri.go:96] found id: ""
	I1222 00:36:27.292105 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.292113 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:27.292118 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:27.292175 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:27.316799 1921402 cri.go:96] found id: ""
	I1222 00:36:27.316813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.316821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:27.316826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:27.316896 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:27.341725 1921402 cri.go:96] found id: ""
	I1222 00:36:27.341739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.341746 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:27.341754 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:27.341764 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:27.363209 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:27.363227 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:27.435344 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:27.427029   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.427893   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.429703   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.430074   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.431490   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:27.435355 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:27.435367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:27.505547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:27.505565 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.534380 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:27.534395 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.100897 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:30.112047 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:30.112110 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:30.143007 1921402 cri.go:96] found id: ""
	I1222 00:36:30.143021 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.143028 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:30.143034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:30.143091 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:30.168174 1921402 cri.go:96] found id: ""
	I1222 00:36:30.168188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.168196 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:30.168202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:30.168270 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:30.198167 1921402 cri.go:96] found id: ""
	I1222 00:36:30.198181 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.198189 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:30.198194 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:30.198254 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:30.228238 1921402 cri.go:96] found id: ""
	I1222 00:36:30.228252 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.228259 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:30.228265 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:30.228326 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:30.252893 1921402 cri.go:96] found id: ""
	I1222 00:36:30.252907 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.252914 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:30.252920 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:30.252981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:30.282045 1921402 cri.go:96] found id: ""
	I1222 00:36:30.282067 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.282074 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:30.282080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:30.282146 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:30.308008 1921402 cri.go:96] found id: ""
	I1222 00:36:30.308020 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.308027 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:30.308036 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:30.308047 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:30.380173 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:30.380183 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:30.380193 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:30.455922 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:30.455942 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:30.489214 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:30.489236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.560024 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:30.560042 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
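
The block above is one full iteration of minikube's diagnostic loop: for each expected control-plane component it shells out to crictl ("crictl ps -a --quiet --name=<component>") and records an empty ID list, meaning no such container exists in the runtime. A minimal Go sketch of that check is below; it mirrors the logged commands but is an illustration, not minikube's actual cri.go implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers asks crictl for container IDs whose name matches the
    // given component, mirroring the "crictl ps -a --quiet --name=..." calls
    // in the log above. crictl exits 0 with empty output when nothing matches.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "--timeout=10s",
    		"ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	// The same component list the log iterates over.
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    		} else {
    			fmt.Printf("%s: %v\n", c, ids)
    		}
    	}
    }

An all-empty result across every component, as seen here, is what drives the subsequent log gathering: with no containers to inspect, only host-level sources (kubelet, dmesg, CRI-O journal) remain.
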
	I1222 00:36:33.076015 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:33.086212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:33.086271 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:33.111816 1921402 cri.go:96] found id: ""
	I1222 00:36:33.111830 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.111837 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:33.111842 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:33.111899 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:33.139689 1921402 cri.go:96] found id: ""
	I1222 00:36:33.139702 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.139709 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:33.139715 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:33.139771 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:33.164725 1921402 cri.go:96] found id: ""
	I1222 00:36:33.164739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.164746 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:33.164752 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:33.164809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:33.191640 1921402 cri.go:96] found id: ""
	I1222 00:36:33.191654 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.191661 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:33.191666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:33.191721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:33.217604 1921402 cri.go:96] found id: ""
	I1222 00:36:33.217618 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.217625 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:33.217631 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:33.217690 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:33.243174 1921402 cri.go:96] found id: ""
	I1222 00:36:33.243188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.243195 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:33.243201 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:33.243256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:33.271929 1921402 cri.go:96] found id: ""
	I1222 00:36:33.271943 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.271950 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:33.271958 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:33.271968 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:33.300147 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:33.300163 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:33.366779 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:33.366797 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.382105 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:33.382122 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:33.445242 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:33.445252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:33.445261 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:36.014121 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:36.025137 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:36.025200 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:36.051061 1921402 cri.go:96] found id: ""
	I1222 00:36:36.051077 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.051085 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:36.051090 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:36.051154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:36.082445 1921402 cri.go:96] found id: ""
	I1222 00:36:36.082458 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.082466 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:36.082471 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:36.082543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:36.108654 1921402 cri.go:96] found id: ""
	I1222 00:36:36.108668 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.108676 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:36.108682 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:36.108746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:36.133581 1921402 cri.go:96] found id: ""
	I1222 00:36:36.133595 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.133602 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:36.133608 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:36.133672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:36.158297 1921402 cri.go:96] found id: ""
	I1222 00:36:36.158311 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.158318 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:36.158324 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:36.158397 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:36.188437 1921402 cri.go:96] found id: ""
	I1222 00:36:36.188451 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.188458 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:36.188464 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:36.188521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:36.213711 1921402 cri.go:96] found id: ""
	I1222 00:36:36.213725 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.213732 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:36.213740 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:36.213752 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:36.241158 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:36.241174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:36.309133 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:36.309152 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:36.323930 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:36.323945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:36.412954 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:36.412965 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:36.412977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
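
Each "failed describe nodes" block has the same root cause: kubectl cannot open a TCP connection to https://localhost:8441, the apiserver endpoint for this profile, so client-go's discovery cache (memcache.go) logs "connection refused" for every API group probe. The sketch below reproduces just the connectivity check with a plain dial; it is an assumption-level illustration, not part of the test harness.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probe checks whether anything is listening on the apiserver endpoint
    // that kubectl fails to reach in the log. A "connect: connection refused"
    // error here corresponds directly to the memcache.go errors above.
    func probe(addr string) error {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	if err := probe("localhost:8441"); err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	fmt.Println("apiserver port is open")
    }

Since the crictl queries just above found no kube-apiserver container at all, the refused connection is expected: nothing is bound to port 8441.
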
	I1222 00:36:38.983065 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:38.993162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:38.993219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:39.019770 1921402 cri.go:96] found id: ""
	I1222 00:36:39.019784 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.019791 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:39.019797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:39.019854 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:39.044991 1921402 cri.go:96] found id: ""
	I1222 00:36:39.045005 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.045012 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:39.045017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:39.045074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:39.070008 1921402 cri.go:96] found id: ""
	I1222 00:36:39.070022 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.070029 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:39.070034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:39.070107 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:39.094755 1921402 cri.go:96] found id: ""
	I1222 00:36:39.094768 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.094776 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:39.094781 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:39.094837 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:39.120054 1921402 cri.go:96] found id: ""
	I1222 00:36:39.120068 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.120075 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:39.120080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:39.120136 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:39.144698 1921402 cri.go:96] found id: ""
	I1222 00:36:39.144711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.144719 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:39.144724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:39.144777 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:39.169125 1921402 cri.go:96] found id: ""
	I1222 00:36:39.169139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.169146 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:39.169153 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:39.169162 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:39.235404 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:39.235423 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:39.250312 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:39.250330 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:39.310727 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:39.310737 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:39.310749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:39.385727 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:39.385745 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
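
The "container status" source uses a shell fallback chain: `which crictl || echo crictl` picks a crictl path, and if that listing fails the command falls back to "docker ps -a". A simplified Go sketch of the same primary-then-fallback pattern follows (the backtick substitution is folded into a plain two-command fallback here; this is a sketch of the pattern, not minikube's logs.go).

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs the primary command and, only if it fails, the fallback,
    // mirroring the "crictl ps -a || docker ps -a" chain used for the
    // "container status" log source above.
    func gather(primary, fallback []string) (string, error) {
    	out, err := exec.Command(primary[0], primary[1:]...).CombinedOutput()
    	if err == nil {
    		return string(out), nil
    	}
    	out, err = exec.Command(fallback[0], fallback[1:]...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := gather(
    		[]string{"sudo", "crictl", "ps", "-a"},
    		[]string{"sudo", "docker", "ps", "-a"},
    	)
    	if err != nil {
    		fmt.Println("both container runtimes failed:", err)
    		return
    	}
    	fmt.Print(out)
    }

The fallback matters on mixed runtimes: this job runs CRI-O, so crictl normally answers, but the docker branch keeps the collector usable on Docker-runtime profiles.
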
	I1222 00:36:41.917457 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:41.927340 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:41.927400 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:41.952148 1921402 cri.go:96] found id: ""
	I1222 00:36:41.952162 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.952169 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:41.952175 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:41.952231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:41.977527 1921402 cri.go:96] found id: ""
	I1222 00:36:41.977542 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.977549 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:41.977555 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:41.977625 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:42.005444 1921402 cri.go:96] found id: ""
	I1222 00:36:42.005462 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.005470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:42.005476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:42.005551 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:42.041797 1921402 cri.go:96] found id: ""
	I1222 00:36:42.041813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.041820 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:42.041826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:42.041888 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:42.069781 1921402 cri.go:96] found id: ""
	I1222 00:36:42.069797 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.069806 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:42.069812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:42.069880 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:42.104984 1921402 cri.go:96] found id: ""
	I1222 00:36:42.105000 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.105009 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:42.105016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:42.105125 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:42.150604 1921402 cri.go:96] found id: ""
	I1222 00:36:42.150620 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.150629 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:42.150637 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:42.150649 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:42.223375 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:42.223400 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:42.242775 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:42.242802 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:42.318351 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:42.318361 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:42.318371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:42.393293 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:42.393311 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:44.928982 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:44.938908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:44.938968 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:44.963638 1921402 cri.go:96] found id: ""
	I1222 00:36:44.963652 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.963659 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:44.963665 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:44.963723 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:44.988816 1921402 cri.go:96] found id: ""
	I1222 00:36:44.988831 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.988838 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:44.988843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:44.988908 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:45.056183 1921402 cri.go:96] found id: ""
	I1222 00:36:45.056202 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.056212 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:45.056218 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:45.056311 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:45.094388 1921402 cri.go:96] found id: ""
	I1222 00:36:45.094407 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.094416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:45.094422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:45.094503 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:45.141522 1921402 cri.go:96] found id: ""
	I1222 00:36:45.141539 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.141547 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:45.141554 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:45.141632 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:45.184485 1921402 cri.go:96] found id: ""
	I1222 00:36:45.184517 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.184539 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:45.184545 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:45.184691 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:45.244764 1921402 cri.go:96] found id: ""
	I1222 00:36:45.244780 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.244789 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:45.244799 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:45.244810 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:45.323564 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:45.323584 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:45.341732 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:45.341749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:45.426921 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:45.426931 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:45.426941 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:45.496872 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:45.496897 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:48.027828 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:48.038486 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:48.038549 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:48.063156 1921402 cri.go:96] found id: ""
	I1222 00:36:48.063170 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.063177 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:48.063183 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:48.063242 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:48.092751 1921402 cri.go:96] found id: ""
	I1222 00:36:48.092764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.092771 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:48.092776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:48.092833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:48.117758 1921402 cri.go:96] found id: ""
	I1222 00:36:48.117771 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.117778 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:48.117784 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:48.117845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:48.156671 1921402 cri.go:96] found id: ""
	I1222 00:36:48.156685 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.156692 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:48.156697 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:48.156759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:48.181935 1921402 cri.go:96] found id: ""
	I1222 00:36:48.181948 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.181961 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:48.181966 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:48.182025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:48.206454 1921402 cri.go:96] found id: ""
	I1222 00:36:48.206468 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.206476 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:48.206481 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:48.206537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:48.230655 1921402 cri.go:96] found id: ""
	I1222 00:36:48.230669 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.230676 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:48.230684 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:48.230694 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:48.297753 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:48.297772 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:48.312560 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:48.312575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:48.405529 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:48.396998   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.397716   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399109   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399746   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.401379   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:48.396998   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.397716   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399109   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399746   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.401379   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:48.405539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:48.405550 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:48.479216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:48.479235 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.006751 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:51.017983 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:51.018043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:51.045081 1921402 cri.go:96] found id: ""
	I1222 00:36:51.045095 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.045102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:51.045133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:51.045231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:51.072909 1921402 cri.go:96] found id: ""
	I1222 00:36:51.072923 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.072931 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:51.072936 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:51.073002 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:51.101558 1921402 cri.go:96] found id: ""
	I1222 00:36:51.101582 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.101590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:51.101596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:51.101667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:51.128728 1921402 cri.go:96] found id: ""
	I1222 00:36:51.128762 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.128771 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:51.128777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:51.128843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:51.155107 1921402 cri.go:96] found id: ""
	I1222 00:36:51.155123 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.155129 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:51.155135 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:51.155196 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:51.183281 1921402 cri.go:96] found id: ""
	I1222 00:36:51.183295 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.183302 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:51.183308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:51.183365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:51.209022 1921402 cri.go:96] found id: ""
	I1222 00:36:51.209050 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.209057 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:51.209078 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:51.209088 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:51.272611 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:51.264496   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.265088   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.266719   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.267234   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.268804   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:51.264496   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.265088   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.266719   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.267234   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.268804   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:51.272624 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:51.272634 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:51.342544 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:51.342563 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.373712 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:51.373727 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:51.452392 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:51.452419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
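
The timestamps show the outer structure: a "pgrep -xnf kube-apiserver.*minikube.*" probe roughly every three seconds, with a full log-gathering pass after each miss, until the apiserver process appears or the overall start deadline expires. A minimal Go sketch of that poll-with-deadline pattern is below; the two-minute timeout is an assumed placeholder, not the harness's actual value.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep the way the log does (one probe roughly
    // every three seconds) until the kube-apiserver process appears or the
    // deadline passes. pgrep exits 0 only when a matching process exists.
    func waitForAPIServer(timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return true
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return false
    }

    func main() {
    	if waitForAPIServer(2 * time.Minute) { // assumed timeout for illustration
    		fmt.Println("kube-apiserver is running")
    	} else {
    		fmt.Println("timed out waiting for kube-apiserver")
    	}
    }

In this run the probe never succeeds, so the loop iterations repeat verbatim (modulo timestamps and kubectl PIDs) until the test's own timeout fires.
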
	I1222 00:36:53.968915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:53.979167 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:53.979236 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:54.009488 1921402 cri.go:96] found id: ""
	I1222 00:36:54.009513 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.009521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:54.009527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:54.009592 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:54.036904 1921402 cri.go:96] found id: ""
	I1222 00:36:54.036919 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.036927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:54.036932 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:54.036991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:54.065561 1921402 cri.go:96] found id: ""
	I1222 00:36:54.065576 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.065583 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:54.065588 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:54.065651 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:54.092446 1921402 cri.go:96] found id: ""
	I1222 00:36:54.092472 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.092479 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:54.092485 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:54.092554 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:54.124282 1921402 cri.go:96] found id: ""
	I1222 00:36:54.124296 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.124303 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:54.124308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:54.124376 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:54.149069 1921402 cri.go:96] found id: ""
	I1222 00:36:54.149082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.149089 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:54.149095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:54.149150 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:54.173952 1921402 cri.go:96] found id: ""
	I1222 00:36:54.173966 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.173973 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:54.173985 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:54.173996 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:54.243687 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:54.243707 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:54.274613 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:54.274629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:54.347248 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:54.347266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:54.368410 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:54.368429 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:54.442889 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:54.435376   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.435813   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437040   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437607   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.439090   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
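Each probe cycle above has the same shape: minikube first looks for a running kube-apiserver process with pgrep, then asks the CRI runtime for containers matching each control-plane name in turn; every lookup returns an empty ID list, so it falls back to gathering logs. A minimal sketch of the same two checks run by hand (the "minikube ssh" wrapper is an assumption; inside the node the commands are exactly the ones logged):

    # expect both to print nothing while the control plane is down
    minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube ssh -- sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver

Both commands returning nothing reproduces the "0 containers" result seen in each cycle.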
	I1222 00:36:56.943148 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:56.953207 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:56.953265 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:56.980068 1921402 cri.go:96] found id: ""
	I1222 00:36:56.980082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:56.980089 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:56.980095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:56.980154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:57.009964 1921402 cri.go:96] found id: ""
	I1222 00:36:57.009979 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.009987 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:57.009992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:57.010056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:57.040772 1921402 cri.go:96] found id: ""
	I1222 00:36:57.040788 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.040796 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:57.040801 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:57.040861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:57.066436 1921402 cri.go:96] found id: ""
	I1222 00:36:57.066449 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.066456 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:57.066461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:57.066518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:57.090966 1921402 cri.go:96] found id: ""
	I1222 00:36:57.090980 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.090987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:57.090992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:57.091052 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:57.120134 1921402 cri.go:96] found id: ""
	I1222 00:36:57.120149 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.120156 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:57.120162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:57.120218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:57.145014 1921402 cri.go:96] found id: ""
	I1222 00:36:57.145027 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.145034 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:57.145041 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:57.145052 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:57.214243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:57.214263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:57.248369 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:57.248387 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:57.316316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:57.316337 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:57.331296 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:57.331313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:57.423941 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:57.415708   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.416209   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.417860   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.418329   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.419836   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
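The describe-nodes failure is a symptom rather than a separate fault: kubectl dials https://localhost:8441 (this run uses the non-default apiserver port 8441 instead of 8443) and is refused because nothing is listening there while no kube-apiserver container exists. A quick hedged check for a listener on that port (ss is assumed to be available in the node image):

    # expect no output while the apiserver is down
    minikube ssh -- sudo ss -ltn 'sport = :8441'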
	I1222 00:36:59.924191 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:59.934553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:59.934611 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:59.963952 1921402 cri.go:96] found id: ""
	I1222 00:36:59.963967 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.963974 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:59.963980 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:59.964039 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:59.989241 1921402 cri.go:96] found id: ""
	I1222 00:36:59.989254 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.989261 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:59.989267 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:59.989346 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:00.041618 1921402 cri.go:96] found id: ""
	I1222 00:37:00.041645 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.041653 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:00.041659 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:00.041743 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:00.156035 1921402 cri.go:96] found id: ""
	I1222 00:37:00.156051 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.156058 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:00.156066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:00.156135 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:00.209590 1921402 cri.go:96] found id: ""
	I1222 00:37:00.209625 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.209642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:00.209658 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:00.209760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:00.265952 1921402 cri.go:96] found id: ""
	I1222 00:37:00.265968 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.265976 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:00.265982 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:00.266053 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:00.306640 1921402 cri.go:96] found id: ""
	I1222 00:37:00.306669 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.306678 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:00.306692 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:00.306705 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:00.391041 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:00.391062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:00.408447 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:00.408463 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:00.477856 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:00.469060   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.469638   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.471535   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.472037   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.473728   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:00.477866 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:00.477876 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:00.550139 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:00.550158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
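Two details of the log gathering are easy to misread. The container-status gatherer is runtime-agnostic: it resolves crictl with "which crictl || echo crictl" and falls back to "docker ps -a" when the crictl call fails. The dmesg gatherer keeps only warning-and-above kernel messages (--level warn,err,crit,alert,emerg), human-readable with the pager and color disabled (-PH -L=never), truncated to the last 400 lines. The container-status fallback, restated with $() instead of backticks (same behavior as the logged command):

    # try a resolved crictl first, then fall back to docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a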
	I1222 00:37:03.081702 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:03.091652 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:03.091709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:03.116220 1921402 cri.go:96] found id: ""
	I1222 00:37:03.116234 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.116241 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:03.116247 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:03.116303 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:03.141299 1921402 cri.go:96] found id: ""
	I1222 00:37:03.141313 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.141340 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:03.141345 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:03.141402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:03.166615 1921402 cri.go:96] found id: ""
	I1222 00:37:03.166630 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.166637 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:03.166642 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:03.166706 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:03.196327 1921402 cri.go:96] found id: ""
	I1222 00:37:03.196341 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.196358 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:03.196366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:03.196432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:03.221691 1921402 cri.go:96] found id: ""
	I1222 00:37:03.221705 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.221712 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:03.221717 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:03.221774 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:03.247455 1921402 cri.go:96] found id: ""
	I1222 00:37:03.247470 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.247477 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:03.247482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:03.247540 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:03.277376 1921402 cri.go:96] found id: ""
	I1222 00:37:03.277398 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.277406 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:03.277414 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:03.277424 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:03.345137 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:03.345158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:03.362155 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:03.362174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:03.438149 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:03.429891   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.430725   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.431629   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.433223   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.434022   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:03.438160 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:03.438172 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:03.508243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:03.508262 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.037621 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:06.048285 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:06.048348 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:06.076896 1921402 cri.go:96] found id: ""
	I1222 00:37:06.076911 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.076918 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:06.076924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:06.076985 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:06.102650 1921402 cri.go:96] found id: ""
	I1222 00:37:06.102665 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.102671 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:06.102677 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:06.102732 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:06.127851 1921402 cri.go:96] found id: ""
	I1222 00:37:06.127865 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.127872 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:06.127878 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:06.127934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:06.154598 1921402 cri.go:96] found id: ""
	I1222 00:37:06.154612 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.154619 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:06.154625 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:06.154683 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:06.181128 1921402 cri.go:96] found id: ""
	I1222 00:37:06.181142 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.181149 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:06.181155 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:06.181215 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:06.207749 1921402 cri.go:96] found id: ""
	I1222 00:37:06.207762 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.207770 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:06.207776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:06.207832 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:06.240273 1921402 cri.go:96] found id: ""
	I1222 00:37:06.240287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.240295 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:06.240302 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:06.240313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:06.310353 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:06.310372 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.337909 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:06.337928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:06.413399 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:06.413419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:06.428074 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:06.428091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:06.491546 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:06.483115   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.484395   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.485866   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.486184   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.487595   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
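The timestamps show the probe repeating on a roughly three-second cadence (00:36:54, :56.9, :59.9, 00:37:03, ...), each cycle re-listing containers and re-gathering the same logs until the apiserver appears or the caller's deadline expires. A hedged shell sketch of an equivalent fixed-interval wait (the /readyz path and the curl flags are assumptions; minikube's own loop is Go, not shell):

    # inside the node: poll until the apiserver answers, then stop
    until curl -ksf https://localhost:8441/readyz >/dev/null; do
        sleep 3
    done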
	I1222 00:37:08.991781 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:09.003482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:09.003561 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:09.030801 1921402 cri.go:96] found id: ""
	I1222 00:37:09.030816 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.030823 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:09.030829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:09.030884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:09.063705 1921402 cri.go:96] found id: ""
	I1222 00:37:09.063728 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.063740 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:09.063746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:09.063819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:09.094354 1921402 cri.go:96] found id: ""
	I1222 00:37:09.094370 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.094378 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:09.094384 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:09.094441 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:09.120692 1921402 cri.go:96] found id: ""
	I1222 00:37:09.120706 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.120714 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:09.120719 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:09.120776 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:09.146623 1921402 cri.go:96] found id: ""
	I1222 00:37:09.146642 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.146649 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:09.146654 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:09.146711 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:09.176276 1921402 cri.go:96] found id: ""
	I1222 00:37:09.176290 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.176297 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:09.176302 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:09.176359 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:09.205092 1921402 cri.go:96] found id: ""
	I1222 00:37:09.205106 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.205113 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:09.205120 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:09.205130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:09.271826 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:09.271851 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:09.287713 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:09.287729 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:09.372451 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:09.363564   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.364517   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366322   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366665   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.368181   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:09.372461 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:09.372474 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:09.456787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:09.456808 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:11.988321 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:11.998626 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:11.998682 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:12.034498 1921402 cri.go:96] found id: ""
	I1222 00:37:12.034513 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.034521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:12.034527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:12.034588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:12.065008 1921402 cri.go:96] found id: ""
	I1222 00:37:12.065023 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.065033 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:12.065038 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:12.065098 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:12.092442 1921402 cri.go:96] found id: ""
	I1222 00:37:12.092456 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.092463 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:12.092469 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:12.092529 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:12.118395 1921402 cri.go:96] found id: ""
	I1222 00:37:12.118408 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.118416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:12.118421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:12.118476 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:12.143621 1921402 cri.go:96] found id: ""
	I1222 00:37:12.143634 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.143642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:12.143647 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:12.143702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:12.172923 1921402 cri.go:96] found id: ""
	I1222 00:37:12.172937 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.172944 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:12.172949 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:12.173004 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:12.199794 1921402 cri.go:96] found id: ""
	I1222 00:37:12.199808 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.199815 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:12.199822 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:12.199833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:12.263467 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:12.255080   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.255523   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257075   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257671   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.259356   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:12.263477 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:12.263488 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:12.334334 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:12.334358 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:12.374086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:12.374104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:12.447671 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:12.447690 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
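Note that the order of the "Gathering logs for ..." steps varies from cycle to cycle (CRI-O first in some, kubelet or describe nodes first in others) while the set of gatherers stays fixed, so the ordering carries no signal. The useful signal is in the kubelet and CRI-O journals the loop keeps snapshotting; to read them interactively rather than as 400-line snapshots (--no-pager is an addition to the logged flags):

    minikube ssh -- sudo journalctl -u kubelet --no-pager -n 400
    minikube ssh -- sudo journalctl -u crio --no-pager -n 400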
	I1222 00:37:14.962865 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:14.972963 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:14.973024 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:14.998158 1921402 cri.go:96] found id: ""
	I1222 00:37:14.998172 1921402 logs.go:282] 0 containers: []
	W1222 00:37:14.998179 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:14.998185 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:14.998243 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:15.029851 1921402 cri.go:96] found id: ""
	I1222 00:37:15.029867 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.029874 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:15.029882 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:15.029948 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:15.057647 1921402 cri.go:96] found id: ""
	I1222 00:37:15.057662 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.057668 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:15.057674 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:15.057742 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:15.084111 1921402 cri.go:96] found id: ""
	I1222 00:37:15.084126 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.084134 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:15.084140 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:15.084205 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:15.114963 1921402 cri.go:96] found id: ""
	I1222 00:37:15.114978 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.114984 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:15.114990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:15.115050 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:15.140108 1921402 cri.go:96] found id: ""
	I1222 00:37:15.140122 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.140129 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:15.140134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:15.140192 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:15.169273 1921402 cri.go:96] found id: ""
	I1222 00:37:15.169287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.169294 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:15.169302 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:15.169312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:15.235722 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:15.235741 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:15.250646 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:15.250662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:15.315252 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:15.306828   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.307685   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309165   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309676   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.311367   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:15.315262 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:15.315272 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:15.390764 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:15.390784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:17.922554 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:17.932679 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:17.932739 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:17.957865 1921402 cri.go:96] found id: ""
	I1222 00:37:17.957879 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.957886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:17.957891 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:17.957952 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:17.987323 1921402 cri.go:96] found id: ""
	I1222 00:37:17.987337 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.987344 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:17.987349 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:17.987410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:18.017446 1921402 cri.go:96] found id: ""
	I1222 00:37:18.017461 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.017469 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:18.017474 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:18.017538 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:18.043105 1921402 cri.go:96] found id: ""
	I1222 00:37:18.043119 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.043126 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:18.043131 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:18.043188 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:18.069214 1921402 cri.go:96] found id: ""
	I1222 00:37:18.069229 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.069236 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:18.069241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:18.069304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:18.094585 1921402 cri.go:96] found id: ""
	I1222 00:37:18.094598 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.094605 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:18.094610 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:18.094670 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:18.124271 1921402 cri.go:96] found id: ""
	I1222 00:37:18.124292 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.124300 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:18.124307 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:18.124318 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:18.191628 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:18.191645 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:18.206373 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:18.206389 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:18.271908 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:18.263844   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.264495   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.265977   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.266531   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.267971   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:18.271918 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:18.271931 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:18.347020 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:18.347041 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
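If the cycles keep coming back empty through the deadline, the next things worth checking are whether kubelet ever created the control-plane pod sandboxes and whether the static-pod manifests are in place; a hedged sketch (the manifest path is the standard kubeadm location, assumed here):

    minikube ssh -- sudo crictl pods
    minikube ssh -- sudo ls /etc/kubernetes/manifests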
	I1222 00:37:20.886577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:20.896630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:20.896687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:20.924710 1921402 cri.go:96] found id: ""
	I1222 00:37:20.924724 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.924731 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:20.924737 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:20.924797 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:20.949929 1921402 cri.go:96] found id: ""
	I1222 00:37:20.949944 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.949951 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:20.949956 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:20.950015 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:20.975241 1921402 cri.go:96] found id: ""
	I1222 00:37:20.975254 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.975262 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:20.975273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:20.975332 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:21.007134 1921402 cri.go:96] found id: ""
	I1222 00:37:21.007149 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.007157 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:21.007162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:21.007226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:21.036823 1921402 cri.go:96] found id: ""
	I1222 00:37:21.036837 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.036845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:21.036850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:21.036922 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:21.066186 1921402 cri.go:96] found id: ""
	I1222 00:37:21.066201 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.066208 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:21.066214 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:21.066273 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:21.092594 1921402 cri.go:96] found id: ""
	I1222 00:37:21.092609 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.092616 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:21.092624 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:21.092658 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:21.155743 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:21.147229   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.148232   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.149047   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150532   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150962   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:21.155753 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:21.155763 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:21.230114 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:21.230133 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:21.258046 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:21.258062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:21.330592 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:21.330619 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:23.847823 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:23.857930 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:23.857993 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:23.883272 1921402 cri.go:96] found id: ""
	I1222 00:37:23.883286 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.883294 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:23.883300 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:23.883355 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:23.907502 1921402 cri.go:96] found id: ""
	I1222 00:37:23.907515 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.907523 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:23.907528 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:23.907588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:23.932684 1921402 cri.go:96] found id: ""
	I1222 00:37:23.932698 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.932705 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:23.932711 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:23.932769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:23.957636 1921402 cri.go:96] found id: ""
	I1222 00:37:23.957650 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.957657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:23.957662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:23.957719 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:23.982476 1921402 cri.go:96] found id: ""
	I1222 00:37:23.982490 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.982497 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:23.982502 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:23.982560 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:24.014243 1921402 cri.go:96] found id: ""
	I1222 00:37:24.014258 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.014265 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:24.014278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:24.014341 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:24.041726 1921402 cri.go:96] found id: ""
	I1222 00:37:24.041740 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.041748 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:24.041756 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:24.041766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:24.110293 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:24.110312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:24.126712 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:24.126736 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:24.190552 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:24.190564 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:24.190575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:24.262140 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:24.262165 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:26.797351 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:26.808692 1921402 kubeadm.go:602] duration metric: took 4m2.99406476s to restartPrimaryControlPlane
	W1222 00:37:26.808754 1921402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:37:26.808829 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:37:27.218865 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:37:27.231651 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:37:27.239390 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:37:27.239447 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:37:27.246990 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:37:27.247000 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:37:27.247052 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:37:27.254602 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:37:27.254657 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:37:27.261917 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:37:27.269756 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:37:27.269819 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:37:27.277175 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.285102 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:37:27.285164 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.292582 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:37:27.300147 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:37:27.300201 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
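Taken together, the grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm init runs. Here the `kubeadm reset` at 00:37:26 already wiped /etc/kubernetes, so every grep exits with status 2 ("No such file or directory") and each `rm -f` is a no-op. A hedged sketch of the same sweep as one loop (file names and endpoint copied from the log; not minikube's actual code):

	# Remove any kubeconfig that does not point at the expected endpoint.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done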
	I1222 00:37:27.307155 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:37:27.346116 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:37:27.346165 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:37:27.414020 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:37:27.414089 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:37:27.414123 1921402 kubeadm.go:319] OS: Linux
	I1222 00:37:27.414168 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:37:27.414215 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:37:27.414261 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:37:27.414307 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:37:27.414354 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:37:27.414404 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:37:27.414448 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:37:27.414494 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:37:27.414538 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:37:27.481436 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:37:27.481554 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:37:27.481674 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:37:27.489751 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:37:27.493685 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:37:27.493793 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:37:27.493881 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:37:27.493967 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:37:27.494032 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:37:27.494108 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:37:27.494164 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:37:27.494230 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:37:27.494294 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:37:27.494372 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:37:27.494448 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:37:27.494487 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:37:27.494545 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:37:27.957235 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:37:28.242322 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:37:28.521612 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:37:28.976211 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:37:29.260315 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:37:29.260957 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:37:29.263905 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:37:29.266956 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:37:29.267051 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:37:29.267139 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:37:29.268296 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:37:29.283008 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:37:29.283332 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:37:29.290939 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:37:29.291397 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:37:29.291445 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:37:29.423121 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:37:29.423418 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:41:29.424266 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00111221s
	I1222 00:41:29.424291 1921402 kubeadm.go:319] 
	I1222 00:41:29.424347 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:41:29.424379 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:41:29.424483 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:41:29.424487 1921402 kubeadm.go:319] 
	I1222 00:41:29.424591 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:41:29.424622 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:41:29.424652 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:41:29.424655 1921402 kubeadm.go:319] 
	I1222 00:41:29.429048 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:41:29.429508 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:41:29.429617 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:41:29.429851 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:41:29.429855 1921402 kubeadm.go:319] 
	I1222 00:41:29.429925 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:41:29.430059 1921402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00111221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
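The SystemVerification warnings above point at the likely root cause: the node is running cgroups v1 on kernel 5.15, and per the warning a v1.35 kubelet refuses cgroup v1 unless FailCgroupV1 is explicitly set to false, which also matches kubeadm's "required cgroups disabled" hint. A hedged sketch of the opt-out the warning describes (field name taken from the warning text; appending assumes the key is not already present in config.yaml, and this is not verified against this exact build):

	# Opt a v1.35+ kubelet back into cgroup v1, per the
	# [WARNING SystemVerification] above, then restart it.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet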
	
	I1222 00:41:29.430143 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:41:29.837125 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:41:29.849742 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:41:29.849801 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:41:29.857510 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:41:29.857520 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:41:29.857569 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:41:29.865043 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:41:29.865100 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:41:29.872214 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:41:29.879983 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:41:29.880037 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:41:29.887891 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.895656 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:41:29.895711 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.902944 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:41:29.910473 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:41:29.910531 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:41:29.917929 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:41:29.958087 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:41:29.958157 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:41:30.050415 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:41:30.050500 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:41:30.050542 1921402 kubeadm.go:319] OS: Linux
	I1222 00:41:30.050625 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:41:30.050679 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:41:30.050743 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:41:30.050790 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:41:30.050861 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:41:30.050918 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:41:30.050963 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:41:30.051010 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:41:30.051055 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:41:30.118501 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:41:30.118617 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:41:30.118712 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:41:30.125936 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:41:30.127797 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:41:30.127893 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:41:30.129701 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:41:30.129794 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:41:30.129856 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:41:30.129967 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:41:30.130025 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:41:30.130096 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:41:30.130183 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:41:30.130272 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:41:30.130344 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:41:30.130391 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:41:30.130455 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:41:30.344018 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:41:30.899259 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:41:31.157696 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:41:31.268170 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:41:31.664677 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:41:31.665563 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:41:31.668250 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:41:31.669734 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:41:31.669834 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:41:31.670177 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:41:31.671652 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:41:31.687500 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:41:31.687601 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:41:31.694901 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:41:31.695472 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:41:31.695535 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:41:31.830593 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:41:31.830722 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:45:31.830877 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001320631s
	I1222 00:45:31.831182 1921402 kubeadm.go:319] 
	I1222 00:45:31.831295 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:45:31.831352 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:45:31.831682 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:45:31.831690 1921402 kubeadm.go:319] 
	I1222 00:45:31.831881 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:45:31.831936 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:45:31.831990 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:45:31.831994 1921402 kubeadm.go:319] 
	I1222 00:45:31.836486 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:45:31.837376 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:45:31.837502 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:45:31.837782 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:45:31.837816 1921402 kubeadm.go:319] 
	I1222 00:45:31.837891 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:45:31.837949 1921402 kubeadm.go:403] duration metric: took 12m8.05957574s to StartCluster
	I1222 00:45:31.837990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:45:31.838056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:45:31.866824 1921402 cri.go:96] found id: ""
	I1222 00:45:31.866837 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.866844 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:45:31.866850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:45:31.866906 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:45:31.892772 1921402 cri.go:96] found id: ""
	I1222 00:45:31.892797 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.892805 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:45:31.892812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:45:31.892881 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:45:31.919961 1921402 cri.go:96] found id: ""
	I1222 00:45:31.919975 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.919982 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:45:31.919987 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:45:31.920043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:45:31.945683 1921402 cri.go:96] found id: ""
	I1222 00:45:31.945697 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.945704 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:45:31.945710 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:45:31.945770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:45:31.971052 1921402 cri.go:96] found id: ""
	I1222 00:45:31.971066 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.971074 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:45:31.971080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:45:31.971139 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:45:31.997143 1921402 cri.go:96] found id: ""
	I1222 00:45:31.997158 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.997165 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:45:31.997170 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:45:31.997226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:45:32.024698 1921402 cri.go:96] found id: ""
	I1222 00:45:32.024713 1921402 logs.go:282] 0 containers: []
	W1222 00:45:32.024721 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:45:32.024729 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:45:32.024739 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:45:32.092110 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:45:32.092130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:45:32.109516 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:45:32.109532 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:45:32.182932 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:45:32.182942 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:45:32.182952 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:45:32.255366 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:45:32.255386 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:45:32.288418 1921402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
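A compact form of the troubleshooting the log itself recommends, plus the exact health endpoint the wait loop polled for 4m0s (commands copied from the kubeadm output above; run them inside the node):

	# The checks kubeadm suggests, plus the endpoint it polls.
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50
	curl -sSL http://127.0.0.1:10248/healthz; echo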
	W1222 00:45:32.288449 1921402 out.go:285] * 
	W1222 00:45:32.288511 1921402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1222 00:45:32.288527 1921402 out.go:285] * 
	W1222 00:45:32.290657 1921402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:45:32.295098 1921402 out.go:203] 
	W1222 00:45:32.297043 1921402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.297089 1921402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:45:32.297111 1921402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:45:32.299905 1921402 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964031496Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964252675Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964296825Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964375806Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-837733 found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005620606Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005780347Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005818057Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-837733 found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040267534Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040399911Z" level=info msg="Image localhost/kicbase/echo-server:functional-837733 not found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040434683Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-837733 found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852687205Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.85283559Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852880003Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852954134Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-837733 found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886292818Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886432169Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886470281Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-837733 found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.911030791Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=369fa22a-71b8-4625-900e-18b79a9137e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962795293Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962862203Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=9c28ba1f-7abf-4263-860e-8a0ef6162d0e name=/runtime.v1.ImageService/ListImages
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:47:52.138949   23667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:52.139716   23667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:52.141377   23667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:52.141861   23667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:52.143368   23667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:47:52 up 1 day,  7:30,  0 user,  load average: 0.15, 0.29, 0.48
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:47:49 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:50 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 504.
	Dec 22 00:47:50 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:50 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:50 functional-837733 kubelet[23509]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:50 functional-837733 kubelet[23509]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:50 functional-837733 kubelet[23509]: E1222 00:47:50.172769   23509 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:50 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:50 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:50 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 505.
	Dec 22 00:47:50 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:50 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:50 functional-837733 kubelet[23552]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:50 functional-837733 kubelet[23552]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:50 functional-837733 kubelet[23552]: E1222 00:47:50.900300   23552 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:50 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:50 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:51 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 506.
	Dec 22 00:47:51 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:51 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:51 functional-837733 kubelet[23581]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:51 functional-837733 kubelet[23581]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:51 functional-837733 kubelet[23581]: E1222 00:47:51.652590   23581 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:51 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:51 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
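
The kubelet journal above isolates the root cause: kubelet v1.35.0-rc.1 exits during configuration validation because the host still runs the cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), systemd restart-loops it (restart counter 504 to 506), the static control-plane pods are never created, and the apiserver on port 8441 never comes up. A minimal triage sketch under that reading, using only standard minikube and systemd tooling (the profile name is taken from this report; the grep filter is illustrative):

	minikube -p functional-837733 ssh -- systemctl status kubelet --no-pager
	minikube -p functional-837733 ssh -- journalctl -u kubelet --no-pager | grep -i cgroup
	# Check which cgroup hierarchy the node exposes:
	minikube -p functional-837733 ssh -- stat -fc %T /sys/fs/cgroup
	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy v1 hierarchy this kubelet rejects.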
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (347.547384ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (2.99s)
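
The SystemVerification warning in the kubeadm output names the escape hatch for this failure: from kubelet v1.35 on, cgroup v1 support must be opted into explicitly by setting the KubeletConfiguration option FailCgroupV1 to false (see the KEP link in the warning). A sketch of that opt-in, assuming direct access to the node; note that /var/lib/minikube and /var/lib/kubelet configs are regenerated by kubeadm/minikube, so a hand edit is illustrative rather than a durable CI fix:

	# Hypothetical manual opt-in on the node (field name per kubelet.config.k8s.io/v1beta1):
	sudo sh -c 'grep -q "^failCgroupV1:" /var/lib/kubelet/config.yaml || echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml'
	sudo systemctl restart kubelet

The direction the warning actually recommends is migrating the host to cgroup v2; this Ubuntu 20.04 builder boots the v1 hierarchy by default, which is presumably why every v1.35.0-rc.1 start in this report fails the same way.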

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-837733 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-837733 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (53.355986ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-837733 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-837733 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-837733 describe po hello-node-connect: exit status 1 (68.997158ms)

** stderr ** 
	E1222 00:47:47.855476 1937810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.857867 1937810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.860699 1937810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.862202 1937810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.863638 1937810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-837733 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-837733 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-837733 logs -l app=hello-node-connect: exit status 1 (67.820898ms)

** stderr ** 
	E1222 00:47:47.927216 1937821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.928743 1937821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.930177 1937821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.931567 1937821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-837733 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-837733 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-837733 describe svc hello-node-connect: exit status 1 (59.769061ms)

** stderr ** 
	E1222 00:47:47.985904 1937826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.987470 1937826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.988833 1937826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.990199 1937826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.991570 1937826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-837733 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
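
Every kubectl call in this post-mortem fails identically with "connection refused" on 192.168.49.2:8441, so the empty describe/logs sections above reflect an unreachable apiserver rather than a missing deployment. A quick reachability probe (sketch; the address, port, and context name come from this log) separates the two cases before dumping workload state:

	# Raw TCP/TLS probe of the endpoint the tests hit:
	curl -k --connect-timeout 5 https://192.168.49.2:8441/readyz || echo "apiserver unreachable"
	# The same check through the kubeconfig the tests use:
	kubectl --context functional-837733 get --raw /readyz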
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
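
The inspect dump shows the Docker side is healthy: State.Status is "running" and 8441/tcp is published to 127.0.0.1:38474, so the connection refusals originate inside the guest (nothing listening on the apiserver port), not in container networking. The same two facts can be extracted without scanning the full JSON (sketch; the Go-template syntax is docker's own, and the port value comes from this dump):

	docker inspect -f '{{.State.Status}}' functional-837733
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-837733
	# Probe the published apiserver port from the host side:
	curl -k --connect-timeout 5 https://127.0.0.1:38474/version || true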
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (289.847438ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
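
Note the split the two status probes expose: {{.Host}} reports Running here while {{.APIServer}} reported Stopped in the StatusCmd failure above, which is exactly the signature of a kubelet that never brought up the control plane inside an otherwise healthy container. All components can be queried in one call (sketch; Host and APIServer are the fields this report already uses, while Kubelet and Kubeconfig are the remaining fields of minikube's status template):

	out/minikube-linux-arm64 status -p functional-837733 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'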
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-837733 image ls                                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /usr/share/ca-certificates/1881219.pem                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image save kicbase/echo-server:functional-837733 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/ssl/certs/18812192.pem                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image rm kicbase/echo-server:functional-837733 --alsologtostderr                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /usr/share/ca-certificates/18812192.pem                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image ls                                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo cat /etc/test/nested/copy/1881219/hosts                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image ls                                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ service │ functional-837733 service list                                                                                                                            │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ image   │ functional-837733 image save --daemon kicbase/echo-server:functional-837733 --alsologtostderr                                                             │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ service │ functional-837733 service list -o json                                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ ssh     │ functional-837733 ssh echo hello                                                                                                                          │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ service │ functional-837733 service --namespace=default --https --url hello-node                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ service │ functional-837733 service hello-node --url --format={{.IP}}                                                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ ssh     │ functional-837733 ssh cat /etc/hostname                                                                                                                   │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ tunnel  │ functional-837733 tunnel --alsologtostderr                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ service │ functional-837733 service hello-node --url                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ tunnel  │ functional-837733 tunnel --alsologtostderr                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ tunnel  │ functional-837733 tunnel --alsologtostderr                                                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ addons  │ functional-837733 addons list                                                                                                                             │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ addons  │ functional-837733 addons list -o json                                                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:33:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:33:19.123176 1921402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:33:19.123288 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123297 1921402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:33:19.123301 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123533 1921402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:33:19.123864 1921402 out.go:368] Setting JSON to false
	I1222 00:33:19.124702 1921402 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112549,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:33:19.124753 1921402 start.go:143] virtualization:  
	I1222 00:33:19.129939 1921402 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:33:19.132820 1921402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:33:19.132934 1921402 notify.go:221] Checking for updates...
	I1222 00:33:19.138397 1921402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:33:19.141280 1921402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:33:19.144132 1921402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:33:19.147015 1921402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:33:19.149703 1921402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:33:19.152994 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:19.153132 1921402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:33:19.180144 1921402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:33:19.180247 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.239273 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.229718138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.239382 1921402 docker.go:319] overlay module found
	I1222 00:33:19.242404 1921402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:33:19.245466 1921402 start.go:309] selected driver: docker
	I1222 00:33:19.245476 1921402 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.245601 1921402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:33:19.245707 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.303768 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.294614837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
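
The struct dump above is the decoded output of `docker system info --format "{{json .}}"` (cli_runner.go:164 followed by info.go:266). A minimal sketch of that decode step, assuming a trimmed-down struct that covers only a handful of the fields visible in the dump; the real struct in minikube is far larger:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo mirrors a small subset of the JSON keys visible in the
// log dump above (NCPU, MemTotal, OSType, ...).
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	OSType        string `json:"OSType"`
	Architecture  string `json:"Architecture"`
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
}

func main() {
	// Same invocation as the Run: line in the log.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s/%s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OSType, info.Architecture, info.NCPU, info.MemTotal)
}
```
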
	I1222 00:33:19.304252 1921402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:33:19.304275 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:19.304327 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
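
cni.go:143 recommends kindnet because the docker driver is paired with the crio runtime. A simplified sketch of that branch, under the assumption that only driver and runtime feed the decision; the real cni.go also honors explicit --cni flags and multinode settings:

```go
package main

import "fmt"

// chooseCNI is a simplified sketch of the decision logged at cni.go:143:
// KIC drivers (docker/podman) with a non-docker runtime get kindnet,
// since the kicbase image ships no working default CNI for cri-o/containerd.
func chooseCNI(driver, runtime string) string {
	kic := driver == "docker" || driver == "podman"
	if kic && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // assumption: fall back to the built-in bridge CNI
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // "kindnet", as in the log above
}
```
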
	I1222 00:33:19.304366 1921402 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.307509 1921402 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:33:19.310426 1921402 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:33:19.313438 1921402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:33:19.316296 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:19.316334 1921402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:33:19.316337 1921402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:33:19.316345 1921402 cache.go:65] Caching tarball of preloaded images
	I1222 00:33:19.316452 1921402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:33:19.316461 1921402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:33:19.316560 1921402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:33:19.335533 1921402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:33:19.335544 1921402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:33:19.335564 1921402 cache.go:243] Successfully downloaded all kic artifacts
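
image.go:81/100 and cache.go:158 skip the kicbase pull because the digest-pinned image is already in the local daemon. minikube performs this check through go-containerregistry rather than the CLI; a CLI-based sketch of the same presence test (imageInDaemon is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether ref is already present in the local
// docker daemon: `docker image inspect` exits non-zero when it is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260"
	if imageInDaemon(ref) {
		fmt.Println("exists in daemon, skipping pull") // cache.go:158 above
	} else {
		fmt.Println("pulling", ref)
	}
}
```
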
	I1222 00:33:19.335600 1921402 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:33:19.335668 1921402 start.go:364] duration metric: took 47.728µs to acquireMachinesLock for "functional-837733"
	I1222 00:33:19.335688 1921402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:33:19.335699 1921402 fix.go:54] fixHost starting: 
	I1222 00:33:19.335947 1921402 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:33:19.365010 1921402 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:33:19.365031 1921402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:33:19.368348 1921402 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:33:19.368376 1921402 machine.go:94] provisionDockerMachine start ...
	I1222 00:33:19.368449 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.391712 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.392069 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.392078 1921402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:33:19.528918 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.528932 1921402 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:33:19.529008 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.546388 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.546693 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.546705 1921402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:33:19.690177 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.690264 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.708307 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.708609 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.708623 1921402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:33:19.841808 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
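
provisionDockerMachine runs each of these commands (hostname, /etc/hostname write, /etc/hosts fixup) over SSH to the container's forwarded port, 127.0.0.1:38471 per the log. A minimal sketch of one such remote command using golang.org/x/crypto/ssh; the port and key path come from the log, and InsecureIgnoreHostKey is an assumption tolerable only because the address is a localhost port-forward:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as reported by sshutil.go:53 above (adjusted to $HOME here).
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/functional-837733/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // localhost forward only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:38471", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same idempotent hostname write the provisioner issues above.
	out, err := sess.CombinedOutput(`sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
```
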
	I1222 00:33:19.841826 1921402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:33:19.841843 1921402 ubuntu.go:190] setting up certificates
	I1222 00:33:19.841860 1921402 provision.go:84] configureAuth start
	I1222 00:33:19.841918 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:19.861819 1921402 provision.go:143] copyHostCerts
	I1222 00:33:19.861874 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:33:19.861889 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:33:19.861964 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:33:19.862062 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:33:19.862066 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:33:19.862089 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:33:19.862134 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:33:19.862137 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:33:19.862158 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:33:19.862198 1921402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
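
provision.go:117 issues a server certificate signed by the minikube CA with the SANs listed (127.0.0.1, 192.168.49.2, functional-837733, localhost, minikube). A self-contained crypto/x509 sketch of that step; it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, so it illustrates the shape of the call, not minikube's exact code path (errors are elided for brevity):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs provision.go:117 lists above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-837733"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-837733", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
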
	I1222 00:33:19.952084 1921402 provision.go:177] copyRemoteCerts
	I1222 00:33:19.952138 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:33:19.952182 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.970262 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.085780 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:33:20.107727 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:33:20.128189 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:33:20.148142 1921402 provision.go:87] duration metric: took 306.259546ms to configureAuth
	I1222 00:33:20.148170 1921402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:33:20.148393 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:20.148508 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.170002 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:20.170298 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:20.170316 1921402 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:33:20.512165 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:33:20.512178 1921402 machine.go:97] duration metric: took 1.143796307s to provisionDockerMachine
	I1222 00:33:20.512211 1921402 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:33:20.512232 1921402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:33:20.512310 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:33:20.512363 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.530913 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.629793 1921402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:33:20.633223 1921402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:33:20.633240 1921402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:33:20.633250 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:33:20.633308 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:33:20.633414 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:33:20.633491 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:33:20.633538 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:33:20.641126 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:20.658985 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:33:20.677182 1921402 start.go:296] duration metric: took 164.956569ms for postStartSetup
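
The filesync.go lines above map everything under .minikube/files/<path> to /<path> in the guest (e.g. files/etc/ssl/certs/18812192.pem -> /etc/ssl/certs/18812192.pem). A sketch of that scan, assuming a hypothetical local root layout:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// scanAssets mirrors the filesync.go scan above: every regular file under
// <root>/files is copied to the guest at the same path relative to /.
func scanAssets(root string) (map[string]string, error) {
	base := filepath.Join(root, "files")
	assets := map[string]string{}
	err := filepath.WalkDir(base, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(p, base) // e.g. /etc/ssl/certs/18812192.pem
		assets[p] = rel
		return nil
	})
	return assets, err
}

func main() {
	m, err := scanAssets(filepath.Join("testdata", ".minikube")) // hypothetical root
	fmt.Println(m, err)
}
```
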
	I1222 00:33:20.677276 1921402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:33:20.677314 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.694936 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.786654 1921402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:33:20.791424 1921402 fix.go:56] duration metric: took 1.455724646s for fixHost
	I1222 00:33:20.791440 1921402 start.go:83] releasing machines lock for "functional-837733", held for 1.455763874s
	I1222 00:33:20.791545 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:20.808426 1921402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:33:20.808469 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.808475 1921402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:33:20.808537 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.830636 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.834199 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.933012 1921402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:33:21.025436 1921402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:33:21.067278 1921402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:33:21.071776 1921402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:33:21.071868 1921402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:33:21.079967 1921402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:33:21.079982 1921402 start.go:496] detecting cgroup driver to use...
	I1222 00:33:21.080013 1921402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:33:21.080062 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:33:21.095680 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:33:21.108798 1921402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:33:21.108852 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:33:21.124535 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:33:21.138031 1921402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:33:21.254740 1921402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:33:21.374618 1921402 docker.go:234] disabling docker service ...
	I1222 00:33:21.374676 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:33:21.389413 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:33:21.402447 1921402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:33:21.520019 1921402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:33:21.643581 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:33:21.657853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:33:21.671138 1921402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:33:21.671202 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.680093 1921402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:33:21.680158 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.688698 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.697375 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.705901 1921402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:33:21.713923 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.722872 1921402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.731197 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.739610 1921402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:33:21.747277 1921402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:33:21.754578 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:21.862305 1921402 ssh_runner.go:195] Run: sudo systemctl restart crio
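
crio.go:59/70 patch the CRI-O drop-in with sed one-liners: whole `pause_image = ...` and `cgroup_manager = ...` lines are replaced wholesale before crio is restarted. The same rewrite done in-process, as a sketch (setConf is a hypothetical helper and the sample TOML content is made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// setConf replicates the sed one-liners above: replace the whole
// `key = ...` line in the crio drop-in with a new quoted value.
func setConf(conf []byte, key, val string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+val+`"`))
}

func main() {
	conf := []byte("[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n")
	conf = setConf(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setConf(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(string(conf))
}
```
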
	I1222 00:33:22.033055 1921402 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:33:22.033138 1921402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:33:22.037243 1921402 start.go:564] Will wait 60s for crictl version
	I1222 00:33:22.037298 1921402 ssh_runner.go:195] Run: which crictl
	I1222 00:33:22.040800 1921402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:33:22.068746 1921402 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:33:22.068856 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.098980 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.132885 1921402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:33:22.135905 1921402 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:33:22.151401 1921402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:33:22.158212 1921402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:33:22.160827 1921402 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:33:22.160965 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:22.161027 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.202970 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.202982 1921402 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:33:22.203036 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.229110 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.229123 1921402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:33:22.229129 1921402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:33:22.229228 1921402 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:33:22.229311 1921402 ssh_runner.go:195] Run: crio config
	I1222 00:33:22.300174 1921402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:33:22.300213 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:22.300224 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:22.300232 1921402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:33:22.300257 1921402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOp
ts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:33:22.300399 1921402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:33:22.300509 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:33:22.308005 1921402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:33:22.308080 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:33:22.315446 1921402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:33:22.327549 1921402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:33:22.340301 1921402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
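
The kubeadm.yaml shown above is rendered from the cluster config and pushed to the guest as /var/tmp/minikube/kubeadm.yaml.new (the 2069-byte scp). minikube renders these documents from Go templates in pkg/minikube/bootstrapper/bsutil; a heavily trimmed sketch of that rendering, covering only a few of the fields visible above:

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the ClusterConfiguration template;
// the real templates carry every section shown in the log.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	t.Execute(os.Stdout, map[string]string{
		"Endpoint":    "control-plane.minikube.internal:8441",
		"Version":     "v1.35.0-rc.1",
		"PodCIDR":     "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	})
}
```
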
	I1222 00:33:22.356214 1921402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:33:22.360143 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:22.489828 1921402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:33:23.096445 1921402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:33:23.096455 1921402 certs.go:195] generating shared ca certs ...
	I1222 00:33:23.096469 1921402 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:33:23.096601 1921402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:33:23.096676 1921402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:33:23.096682 1921402 certs.go:257] generating profile certs ...
	I1222 00:33:23.096765 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:33:23.096814 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:33:23.096847 1921402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:33:23.097000 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:33:23.097032 1921402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:33:23.097039 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:33:23.097063 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:33:23.097085 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:33:23.097109 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:33:23.097150 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:23.097934 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:33:23.122733 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:33:23.143796 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:33:23.165783 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:33:23.184349 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:33:23.201211 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:33:23.218537 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:33:23.235950 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:33:23.253260 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:33:23.270022 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:33:23.287385 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:33:23.304167 1921402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:33:23.316581 1921402 ssh_runner.go:195] Run: openssl version
	I1222 00:33:23.322651 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.329731 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:33:23.336799 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340358 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340410 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.381351 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:33:23.388858 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.396030 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:33:23.403636 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407354 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407408 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.449165 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:33:23.456643 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.463729 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:33:23.471167 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474893 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474950 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.516093 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:33:23.523732 1921402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:33:23.527686 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:33:23.568782 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:33:23.609619 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:33:23.650314 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:33:23.696111 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:33:23.737413 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
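
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours (a non-zero exit means it does, which would trigger regeneration). The same check in pure Go, as a sketch; the path in main is hypothetical, since the log checks certs inside the guest:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the pure-Go equivalent of
// `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```
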
	I1222 00:33:23.778391 1921402 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:23.778492 1921402 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:33:23.778592 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.806800 1921402 cri.go:96] found id: ""
	I1222 00:33:23.806880 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:33:23.814609 1921402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:33:23.814618 1921402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:33:23.814668 1921402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:33:23.821980 1921402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.822486 1921402 kubeconfig.go:125] found "functional-837733" server: "https://192.168.49.2:8441"
	I1222 00:33:23.823779 1921402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:33:23.831691 1921402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:18:48.647493207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:33:22.349448241 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
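
kubeadm.go:645 detects the drift by diffing the deployed kubeadm.yaml against the freshly rendered .new file: `diff -u` exits 0 when the files are identical and 1 when they differ, and the exit-1 case is what triggers the reconfigure here. A sketch of that exit-code check:

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted mirrors the drift check above: `diff -u old new`
// exits 0 when identical, 1 when the files differ.
func configDrifted(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, ""
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out) // the unified diff shown in the log
	}
	return false, err.Error() // exit > 1 means diff itself failed
}

func main() {
	drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted)
	fmt.Print(diff)
}
```
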
	I1222 00:33:23.831708 1921402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:33:23.831719 1921402 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 00:33:23.831773 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.866496 1921402 cri.go:96] found id: ""
	I1222 00:33:23.866558 1921402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:33:23.889134 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:33:23.897048 1921402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 22 00:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 00:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:22 /etc/kubernetes/scheduler.conf
	
	I1222 00:33:23.897106 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:33:23.904740 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:33:23.912683 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.912741 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:33:23.920107 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.927907 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.927961 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.935547 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:33:23.943201 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.943254 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:33:23.950967 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:33:23.959359 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:24.006017 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.352657 1921402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346615684s)
	I1222 00:33:25.352716 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.546706 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.608812 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.656813 1921402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:33:25.656915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.157087 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.657071 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:27.157877 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:27.657904 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:28.157051 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 82 near-identical pgrep polls elided; the same command repeats every ~500ms from 00:33:28 through 00:34:09 ...]
	I1222 00:34:09.657652 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:10.157707 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:10.657722 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:11.157334 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:11.657276 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:12.157900 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:12.657113 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:13.157804 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:13.657609 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:14.157350 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:14.657597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:15.157180 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:15.658004 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:16.157770 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:16.657721 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:17.157789 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:17.657835 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:18.157788 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:18.657645 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:19.157603 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:19.657417 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:20.157127 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:20.657971 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:21.157737 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:21.657060 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:22.157985 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:22.657010 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:23.157051 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:23.657970 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:24.157033 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:24.657732 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:25.157680 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
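	These hundred-odd probes are the apiserver wait loop: ssh_runner re-issues the same pgrep roughly every 500ms until a kube-apiserver process appears or the wait times out. A minimal sketch of that polling pattern in Go (illustrative only: minikube actually executes the command over SSH inside the node, and the 500ms interval and 90s timeout here are assumptions read off the timestamps, not minikube's real constants):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerRunning mirrors the probe in the log: pgrep exits 0 when at least
	// one process matches the pattern, so a nil error means the apiserver is up.
	func apiServerRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// waitForAPIServer polls until the check passes or the timeout elapses.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiServerRunning() {
				return nil
			}
			time.Sleep(interval) // ~500ms cadence, matching the timestamps above
		}
		return errors.New("timed out waiting for kube-apiserver process")
	}

	func main() {
		if err := waitForAPIServer(500*time.Millisecond, 90*time.Second); err != nil {
			fmt.Println(err)
		}
	}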
	I1222 00:34:25.657000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:25.657101 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:25.682430 1921402 cri.go:96] found id: ""
	I1222 00:34:25.682444 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.682451 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:25.682457 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:25.682521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:25.708391 1921402 cri.go:96] found id: ""
	I1222 00:34:25.708406 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.708414 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:25.708420 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:25.708495 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:25.735246 1921402 cri.go:96] found id: ""
	I1222 00:34:25.735261 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.735268 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:25.735273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:25.735330 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:25.764584 1921402 cri.go:96] found id: ""
	I1222 00:34:25.764599 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.764606 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:25.764611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:25.764672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:25.790407 1921402 cri.go:96] found id: ""
	I1222 00:34:25.790421 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.790428 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:25.790434 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:25.790492 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:25.816256 1921402 cri.go:96] found id: ""
	I1222 00:34:25.816270 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.816278 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:25.816283 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:25.816345 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:25.842813 1921402 cri.go:96] found id: ""
	I1222 00:34:25.842828 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.842835 1921402 logs.go:284] No container was found matching "kindnet"
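	With the process wait exhausted, each diagnostic pass lists containers for every expected component using the same crictl invocation, one name at a time, and warns when the ID list comes back empty. A sketch of that per-component loop (again run locally rather than via ssh_runner; the component list is copied from the log above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// --quiet prints one container ID per line; -a includes stopped containers.
			out, err := exec.Command("sudo", "crictl", "--timeout=10s",
				"ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("W: No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}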
	I1222 00:34:25.842843 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:25.842853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:25.914526 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:25.914546 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:25.948008 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:25.948024 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:26.016574 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:26.016596 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:26.032648 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:26.032666 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:26.102628 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:26.094028   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.094916   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.096594   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.097214   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.098390   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
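	Every describe-nodes attempt fails identically: kubectl cannot reach https://localhost:8441 and gets connection refused, which is consistent with pgrep and crictl finding no kube-apiserver at all. A quick stdlib check that distinguishes "nothing listening" from a TLS or auth problem (the address comes from the kubeconfig URL in the errors above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl tries. A plain TCP dial is enough to tell a
		// closed port apart from a reachable-but-unhealthy apiserver.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // expect "connection refused" here
			return
		}
		conn.Close()
		fmt.Println("something is listening on :8441")
	}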
	I1222 00:34:28.604431 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:28.618535 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:28.618599 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:28.647904 1921402 cri.go:96] found id: ""
	I1222 00:34:28.647920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.647927 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:28.647933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:28.647987 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:28.674397 1921402 cri.go:96] found id: ""
	I1222 00:34:28.674411 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.674418 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:28.674423 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:28.674481 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:28.704093 1921402 cri.go:96] found id: ""
	I1222 00:34:28.704107 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.704114 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:28.704129 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:28.704187 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:28.729449 1921402 cri.go:96] found id: ""
	I1222 00:34:28.729463 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.729470 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:28.729475 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:28.729535 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:28.754277 1921402 cri.go:96] found id: ""
	I1222 00:34:28.754292 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.754300 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:28.754305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:28.754366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:28.779419 1921402 cri.go:96] found id: ""
	I1222 00:34:28.779445 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.779453 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:28.779459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:28.779526 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:28.804826 1921402 cri.go:96] found id: ""
	I1222 00:34:28.804841 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.804848 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:28.804855 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:28.804866 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:28.871702 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:28.871720 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:28.886327 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:28.886342 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:28.950915 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:28.942587   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.943003   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.944829   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.945306   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.946899   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:28.950935 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:28.950945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:29.021787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:29.021807 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
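	The gathering phase shells out through /bin/bash -c for each source: kubelet and CRI-O via journalctl, dmesg filtered to warnings and above, and container status with a `which crictl || echo crictl` fallback so it degrades to docker ps -a when crictl is missing. A compact sketch of one such pass (a local stand-in for ssh_runner; the commands are copied verbatim from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command through bash -c, as ssh_runner does,
	// and prints its combined output.
	func gather(label, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err: %v) ===\n%s\n", label, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("CRI-O", "sudo journalctl -u crio -n 400")
		// Falls back to docker when crictl is absent, exactly as in the log.
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}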
	I1222 00:34:31.549965 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:31.559848 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:31.559909 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:31.586473 1921402 cri.go:96] found id: ""
	I1222 00:34:31.586487 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.586494 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:31.586500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:31.586557 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:31.619635 1921402 cri.go:96] found id: ""
	I1222 00:34:31.619649 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.619657 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:31.619662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:31.619718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:31.649990 1921402 cri.go:96] found id: ""
	I1222 00:34:31.650004 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.650011 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:31.650017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:31.650079 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:31.676339 1921402 cri.go:96] found id: ""
	I1222 00:34:31.676354 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.676361 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:31.676366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:31.676424 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:31.704084 1921402 cri.go:96] found id: ""
	I1222 00:34:31.704098 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.704105 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:31.704110 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:31.704170 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:31.728706 1921402 cri.go:96] found id: ""
	I1222 00:34:31.728720 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.728727 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:31.728732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:31.728790 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:31.756276 1921402 cri.go:96] found id: ""
	I1222 00:34:31.756290 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.756297 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:31.756304 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:31.756314 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:31.825066 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:31.825085 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.853826 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:31.853841 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:31.920106 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:31.920125 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:31.934967 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:31.934983 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:31.996347 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:34.497426 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:34.507616 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:34.507676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:34.531742 1921402 cri.go:96] found id: ""
	I1222 00:34:34.531755 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.531763 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:34.531769 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:34.531843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:34.559670 1921402 cri.go:96] found id: ""
	I1222 00:34:34.559683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.559691 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:34.559695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:34.559762 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:34.587516 1921402 cri.go:96] found id: ""
	I1222 00:34:34.587531 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.587538 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:34.587543 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:34.587621 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:34.618669 1921402 cri.go:96] found id: ""
	I1222 00:34:34.618683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.618690 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:34.618695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:34.618754 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:34.655347 1921402 cri.go:96] found id: ""
	I1222 00:34:34.655361 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.655368 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:34.655373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:34.655429 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:34.680612 1921402 cri.go:96] found id: ""
	I1222 00:34:34.680626 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.680643 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:34.680648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:34.680718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:34.706049 1921402 cri.go:96] found id: ""
	I1222 00:34:34.706070 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.706078 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:34.706085 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:34.706095 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:34.774755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:34.774774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:34.789897 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:34.789917 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:34.854675 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:34.854686 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:34.854696 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:34.923449 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:34.923469 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.451135 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:37.461580 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:37.461643 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:37.488448 1921402 cri.go:96] found id: ""
	I1222 00:34:37.488474 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.488482 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:37.488488 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:37.488544 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:37.513499 1921402 cri.go:96] found id: ""
	I1222 00:34:37.513514 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.513526 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:37.513531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:37.513595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:37.539878 1921402 cri.go:96] found id: ""
	I1222 00:34:37.539892 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.539899 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:37.539905 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:37.539979 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:37.568838 1921402 cri.go:96] found id: ""
	I1222 00:34:37.568853 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.568860 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:37.568866 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:37.568934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:37.604110 1921402 cri.go:96] found id: ""
	I1222 00:34:37.604124 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.604131 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:37.604156 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:37.604212 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:37.639214 1921402 cri.go:96] found id: ""
	I1222 00:34:37.639228 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.639235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:37.639241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:37.639301 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:37.674749 1921402 cri.go:96] found id: ""
	I1222 00:34:37.674763 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.674771 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:37.674779 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:37.674790 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.709493 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:37.709513 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:37.781144 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:37.781164 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:37.797445 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:37.797461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:37.864668 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:37.864678 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:37.864688 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.444815 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:40.456172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:40.456239 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:40.482752 1921402 cri.go:96] found id: ""
	I1222 00:34:40.482774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.482781 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:40.482786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:40.482846 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:40.508000 1921402 cri.go:96] found id: ""
	I1222 00:34:40.508014 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.508021 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:40.508026 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:40.508087 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:40.533866 1921402 cri.go:96] found id: ""
	I1222 00:34:40.533881 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.533889 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:40.533894 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:40.533955 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:40.559781 1921402 cri.go:96] found id: ""
	I1222 00:34:40.559794 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.559802 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:40.559807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:40.559866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:40.586989 1921402 cri.go:96] found id: ""
	I1222 00:34:40.587003 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.587010 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:40.587016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:40.587074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:40.619369 1921402 cri.go:96] found id: ""
	I1222 00:34:40.619384 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.619400 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:40.619405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:40.619462 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:40.648846 1921402 cri.go:96] found id: ""
	I1222 00:34:40.648870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.648878 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:40.648886 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:40.648896 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:40.720161 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:40.720180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:40.736116 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:40.736141 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:40.808179 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:40.808193 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:40.808204 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.876486 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:40.876505 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.409595 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:43.419504 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:43.419564 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:43.444338 1921402 cri.go:96] found id: ""
	I1222 00:34:43.444360 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.444368 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:43.444373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:43.444430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:43.471938 1921402 cri.go:96] found id: ""
	I1222 00:34:43.471953 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.471960 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:43.471965 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:43.472025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:43.497576 1921402 cri.go:96] found id: ""
	I1222 00:34:43.497590 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.497597 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:43.497611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:43.497667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:43.526748 1921402 cri.go:96] found id: ""
	I1222 00:34:43.526762 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.526769 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:43.526775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:43.526838 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:43.556676 1921402 cri.go:96] found id: ""
	I1222 00:34:43.556691 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.556698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:43.556704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:43.556766 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:43.584425 1921402 cri.go:96] found id: ""
	I1222 00:34:43.584440 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.584447 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:43.584453 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:43.584514 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:43.619760 1921402 cri.go:96] found id: ""
	I1222 00:34:43.619774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.619782 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:43.619789 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:43.619813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:43.693963 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:43.693985 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.722757 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:43.722773 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:43.790316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:43.790336 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:43.804810 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:43.804829 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:43.866141 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:46.366416 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:46.376013 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:46.376073 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:46.402801 1921402 cri.go:96] found id: ""
	I1222 00:34:46.402815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.402822 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:46.402828 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:46.402884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:46.431904 1921402 cri.go:96] found id: ""
	I1222 00:34:46.431920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.431927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:46.431933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:46.431991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:46.456574 1921402 cri.go:96] found id: ""
	I1222 00:34:46.456588 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.456594 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:46.456600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:46.456656 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:46.481542 1921402 cri.go:96] found id: ""
	I1222 00:34:46.481556 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.481563 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:46.481569 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:46.481627 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:46.507668 1921402 cri.go:96] found id: ""
	I1222 00:34:46.507682 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.507689 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:46.507694 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:46.507756 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:46.531662 1921402 cri.go:96] found id: ""
	I1222 00:34:46.531675 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.531683 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:46.531688 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:46.531746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:46.558345 1921402 cri.go:96] found id: ""
	I1222 00:34:46.558359 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.558367 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:46.558375 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:46.558385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:46.630348 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:46.630367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:46.648136 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:46.648153 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:46.720659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:46.720669 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:46.720689 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:46.790237 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:46.790255 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
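
The cycle above repeats on a roughly three-second cadence for the rest of this section: probe for an apiserver process, list each of the seven expected control-plane and CNI containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet; all come back empty), then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A sketch of the equivalent wait loop, run inside the node; the 3 s interval is read off the log timestamps, not taken from minikube's source:

	# Poll until an apiserver process appears, mirroring the probes in the log above.
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl --timeout=10s ps -a --quiet --name="$c"   # empty output: component not running
	  done
	  sleep 3
	done
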
	I1222 00:34:49.320987 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:49.331027 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:49.331083 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:49.355873 1921402 cri.go:96] found id: ""
	I1222 00:34:49.355887 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.355903 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:49.355909 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:49.355967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:49.381127 1921402 cri.go:96] found id: ""
	I1222 00:34:49.381151 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.381158 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:49.381163 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:49.381231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:49.407961 1921402 cri.go:96] found id: ""
	I1222 00:34:49.407983 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.407990 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:49.407995 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:49.408062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:49.437856 1921402 cri.go:96] found id: ""
	I1222 00:34:49.437870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.437877 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:49.437883 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:49.437940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:49.462901 1921402 cri.go:96] found id: ""
	I1222 00:34:49.462914 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.462921 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:49.462927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:49.462981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:49.487289 1921402 cri.go:96] found id: ""
	I1222 00:34:49.487303 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.487309 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:49.487314 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:49.487372 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:49.513186 1921402 cri.go:96] found id: ""
	I1222 00:34:49.513200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.513208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:49.513215 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:49.513225 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:49.527889 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:49.527904 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:49.591170 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:49.591180 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:49.591189 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:49.670799 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:49.670817 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.698837 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:49.698853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.268261 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:52.278246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:52.278304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:52.303014 1921402 cri.go:96] found id: ""
	I1222 00:34:52.303035 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.303043 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:52.303048 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:52.303114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:52.331907 1921402 cri.go:96] found id: ""
	I1222 00:34:52.331921 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.331928 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:52.331933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:52.331989 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:52.357227 1921402 cri.go:96] found id: ""
	I1222 00:34:52.357240 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.357247 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:52.357253 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:52.357315 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:52.383730 1921402 cri.go:96] found id: ""
	I1222 00:34:52.383745 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.383752 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:52.383758 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:52.383819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:52.409605 1921402 cri.go:96] found id: ""
	I1222 00:34:52.409620 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.409636 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:52.409641 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:52.409701 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:52.438817 1921402 cri.go:96] found id: ""
	I1222 00:34:52.438831 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.438838 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:52.438843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:52.438901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:52.464983 1921402 cri.go:96] found id: ""
	I1222 00:34:52.465007 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.465016 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:52.465027 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:52.465038 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:52.533460 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:52.533470 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:52.533493 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:52.602597 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:52.602617 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:52.636250 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:52.636266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.708981 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:52.709002 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.224727 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:55.235168 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:55.235225 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:55.260911 1921402 cri.go:96] found id: ""
	I1222 00:34:55.260926 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.260933 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:55.260938 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:55.260994 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:55.286615 1921402 cri.go:96] found id: ""
	I1222 00:34:55.286634 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.286651 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:55.286657 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:55.286736 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:55.311470 1921402 cri.go:96] found id: ""
	I1222 00:34:55.311484 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.311492 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:55.311497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:55.311555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:55.336659 1921402 cri.go:96] found id: ""
	I1222 00:34:55.336673 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.336689 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:55.336695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:55.336755 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:55.368789 1921402 cri.go:96] found id: ""
	I1222 00:34:55.368815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.368823 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:55.368829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:55.368895 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:55.394360 1921402 cri.go:96] found id: ""
	I1222 00:34:55.394374 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.394381 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:55.394387 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:55.394448 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:55.419182 1921402 cri.go:96] found id: ""
	I1222 00:34:55.419196 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.419204 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:55.419211 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:55.419224 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:55.486298 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:55.486316 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.501160 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:55.501179 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:55.569075 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:55.569085 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:55.569105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:55.643182 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:55.643207 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.182663 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:58.193476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:58.193537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:58.219859 1921402 cri.go:96] found id: ""
	I1222 00:34:58.219873 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.219880 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:58.219886 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:58.219944 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:58.245686 1921402 cri.go:96] found id: ""
	I1222 00:34:58.245700 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.245707 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:58.245712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:58.245769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:58.272173 1921402 cri.go:96] found id: ""
	I1222 00:34:58.272187 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.272194 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:58.272199 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:58.272258 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:58.296505 1921402 cri.go:96] found id: ""
	I1222 00:34:58.296519 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.296526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:58.296532 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:58.296590 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:58.321897 1921402 cri.go:96] found id: ""
	I1222 00:34:58.321911 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.321919 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:58.321924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:58.321981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:58.347185 1921402 cri.go:96] found id: ""
	I1222 00:34:58.347200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.347206 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:58.347212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:58.347272 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:58.371593 1921402 cri.go:96] found id: ""
	I1222 00:34:58.371607 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.371614 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:58.371621 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:58.371631 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:58.439841 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:58.439861 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.471817 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:58.471833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:58.540043 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:58.540062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:58.556739 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:58.556755 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:58.632107 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:01.133446 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:01.145134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:01.145201 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:01.174836 1921402 cri.go:96] found id: ""
	I1222 00:35:01.174853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.174861 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:01.174867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:01.174943 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:01.202691 1921402 cri.go:96] found id: ""
	I1222 00:35:01.202706 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.202715 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:01.202721 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:01.202785 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:01.232555 1921402 cri.go:96] found id: ""
	I1222 00:35:01.232578 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.232586 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:01.232592 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:01.232687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:01.260828 1921402 cri.go:96] found id: ""
	I1222 00:35:01.260853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.260861 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:01.260867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:01.260954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:01.290608 1921402 cri.go:96] found id: ""
	I1222 00:35:01.290623 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.290630 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:01.290635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:01.290721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:01.321704 1921402 cri.go:96] found id: ""
	I1222 00:35:01.321719 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.321726 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:01.321732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:01.321802 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:01.349512 1921402 cri.go:96] found id: ""
	I1222 00:35:01.349526 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.349533 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:01.349540 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:01.349551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:01.385559 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:01.385575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:01.457873 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:01.457900 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:01.473910 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:01.473928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:01.540419 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:01.540431 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:01.540442 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:04.114962 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:04.126202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:04.126262 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:04.157781 1921402 cri.go:96] found id: ""
	I1222 00:35:04.157795 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.157802 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:04.157807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:04.157868 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:04.185090 1921402 cri.go:96] found id: ""
	I1222 00:35:04.185104 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.185114 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:04.185119 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:04.185189 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:04.210432 1921402 cri.go:96] found id: ""
	I1222 00:35:04.210446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.210453 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:04.210458 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:04.210518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:04.235769 1921402 cri.go:96] found id: ""
	I1222 00:35:04.235783 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.235791 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:04.235797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:04.235853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:04.260844 1921402 cri.go:96] found id: ""
	I1222 00:35:04.260858 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.260867 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:04.260872 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:04.260940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:04.286088 1921402 cri.go:96] found id: ""
	I1222 00:35:04.286103 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.286110 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:04.286115 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:04.286171 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:04.311902 1921402 cri.go:96] found id: ""
	I1222 00:35:04.311916 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.311923 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:04.311930 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:04.311945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:04.383800 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:04.383820 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:04.398944 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:04.398960 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:04.463648 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:04.463658 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:04.463668 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:04.533070 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:04.533091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:07.062949 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:07.072997 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:07.073058 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:07.104724 1921402 cri.go:96] found id: ""
	I1222 00:35:07.104739 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.104746 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:07.104751 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:07.104808 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:07.142683 1921402 cri.go:96] found id: ""
	I1222 00:35:07.142697 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.142704 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:07.142712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:07.142772 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:07.168569 1921402 cri.go:96] found id: ""
	I1222 00:35:07.168583 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.168590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:07.168596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:07.168654 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:07.196279 1921402 cri.go:96] found id: ""
	I1222 00:35:07.196295 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.196302 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:07.196307 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:07.196366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:07.220862 1921402 cri.go:96] found id: ""
	I1222 00:35:07.220876 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.220883 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:07.220888 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:07.220967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:07.247783 1921402 cri.go:96] found id: ""
	I1222 00:35:07.247797 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.247804 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:07.247810 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:07.247866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:07.272632 1921402 cri.go:96] found id: ""
	I1222 00:35:07.272658 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.272668 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:07.272675 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:07.272685 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:07.342006 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:07.342026 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:07.357444 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:07.357461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:07.428354 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:07.428365 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:07.428376 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:07.499031 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:07.499051 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.035662 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:10.047188 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:10.047256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:10.074669 1921402 cri.go:96] found id: ""
	I1222 00:35:10.074694 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.074702 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:10.074708 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:10.074775 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:10.111688 1921402 cri.go:96] found id: ""
	I1222 00:35:10.111702 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.111710 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:10.111716 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:10.111816 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:10.141600 1921402 cri.go:96] found id: ""
	I1222 00:35:10.141614 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.141622 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:10.141635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:10.141697 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:10.170620 1921402 cri.go:96] found id: ""
	I1222 00:35:10.170635 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.170642 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:10.170648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:10.170709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:10.196076 1921402 cri.go:96] found id: ""
	I1222 00:35:10.196090 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.196098 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:10.196111 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:10.196173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:10.222435 1921402 cri.go:96] found id: ""
	I1222 00:35:10.222449 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.222456 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:10.222461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:10.222518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:10.248319 1921402 cri.go:96] found id: ""
	I1222 00:35:10.248333 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.248350 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:10.248356 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:10.248367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:10.263245 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:10.263267 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:10.325942 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
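Every describe-nodes attempt in this section fails the same way: nothing is listening on the apiserver port 8441, so kubectl's connection is refused before any API call is made. Assuming curl is available in the node image and the standard kube-apiserver /healthz endpoint (neither appears in the log itself), a one-line probe from inside the node would confirm the same condition:

	# -s: quiet, -k: skip TLS verification; 8441 is the apiserver port from the log
	curl -sk https://localhost:8441/healthz || echo "apiserver is not listening on 8441"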
	I1222 00:35:10.325966 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:10.325977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:10.395942 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:10.395961 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.423671 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:10.423686 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:12.990380 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:13.000511 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:13.000585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:13.028285 1921402 cri.go:96] found id: ""
	I1222 00:35:13.028299 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.028306 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:13.028311 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:13.028380 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:13.055795 1921402 cri.go:96] found id: ""
	I1222 00:35:13.055809 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.055816 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:13.055821 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:13.055878 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:13.083562 1921402 cri.go:96] found id: ""
	I1222 00:35:13.083576 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.083584 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:13.083589 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:13.083647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:13.123340 1921402 cri.go:96] found id: ""
	I1222 00:35:13.123353 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.123360 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:13.123366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:13.123422 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:13.160293 1921402 cri.go:96] found id: ""
	I1222 00:35:13.160306 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.160314 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:13.160321 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:13.160393 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:13.186389 1921402 cri.go:96] found id: ""
	I1222 00:35:13.186403 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.186410 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:13.186415 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:13.186470 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:13.214187 1921402 cri.go:96] found id: ""
	I1222 00:35:13.214202 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.214208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:13.214216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:13.214228 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:13.241162 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:13.241178 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:13.306852 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:13.306871 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:13.322142 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:13.322158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:13.389384 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:13.389394 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:13.389419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:15.963833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:15.973870 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:15.973930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:15.999121 1921402 cri.go:96] found id: ""
	I1222 00:35:15.999135 1921402 logs.go:282] 0 containers: []
	W1222 00:35:15.999142 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:15.999147 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:15.999203 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:16.027303 1921402 cri.go:96] found id: ""
	I1222 00:35:16.027317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.027324 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:16.027330 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:16.027390 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:16.056726 1921402 cri.go:96] found id: ""
	I1222 00:35:16.056740 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.056747 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:16.056753 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:16.056809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:16.082920 1921402 cri.go:96] found id: ""
	I1222 00:35:16.082934 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.082941 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:16.082946 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:16.083011 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:16.120145 1921402 cri.go:96] found id: ""
	I1222 00:35:16.120160 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.120167 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:16.120172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:16.120228 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:16.149800 1921402 cri.go:96] found id: ""
	I1222 00:35:16.149814 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.149821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:16.149827 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:16.149883 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:16.175126 1921402 cri.go:96] found id: ""
	I1222 00:35:16.175140 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.175147 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:16.175155 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:16.175168 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:16.241780 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:16.241800 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:16.256196 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:16.256212 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:16.317803 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:16.317813 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:16.317824 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:16.386643 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:16.386662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:18.918855 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:18.928947 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:18.929009 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:18.954131 1921402 cri.go:96] found id: ""
	I1222 00:35:18.954151 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.954158 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:18.954169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:18.954229 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:18.978729 1921402 cri.go:96] found id: ""
	I1222 00:35:18.978780 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.978788 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:18.978793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:18.978859 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:19.006460 1921402 cri.go:96] found id: ""
	I1222 00:35:19.006474 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.006482 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:19.006487 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:19.006547 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:19.035802 1921402 cri.go:96] found id: ""
	I1222 00:35:19.035818 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.035825 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:19.035830 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:19.035889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:19.061703 1921402 cri.go:96] found id: ""
	I1222 00:35:19.061716 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.061723 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:19.061729 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:19.061789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:19.094243 1921402 cri.go:96] found id: ""
	I1222 00:35:19.094256 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.094264 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:19.094269 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:19.094324 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:19.135211 1921402 cri.go:96] found id: ""
	I1222 00:35:19.135238 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.135246 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:19.135253 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:19.135263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:19.203366 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:19.203376 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:19.203385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:19.275567 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:19.275585 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:19.303090 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:19.303105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:19.372108 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:19.372127 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:21.888736 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:21.899401 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:21.899461 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:21.924030 1921402 cri.go:96] found id: ""
	I1222 00:35:21.924045 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.924052 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:21.924057 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:21.924114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:21.950593 1921402 cri.go:96] found id: ""
	I1222 00:35:21.950606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.950613 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:21.950618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:21.950676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:21.978354 1921402 cri.go:96] found id: ""
	I1222 00:35:21.978368 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.978375 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:21.978380 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:21.978437 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:22.005709 1921402 cri.go:96] found id: ""
	I1222 00:35:22.005724 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.005734 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:22.005741 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:22.005807 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:22.033771 1921402 cri.go:96] found id: ""
	I1222 00:35:22.033785 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.033793 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:22.033799 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:22.033862 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:22.063584 1921402 cri.go:96] found id: ""
	I1222 00:35:22.063598 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.063606 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:22.063611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:22.063672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:22.090629 1921402 cri.go:96] found id: ""
	I1222 00:35:22.090643 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.090651 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:22.090658 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:22.090669 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:22.172686 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:22.172713 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:22.192440 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:22.192459 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:22.261299 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:22.261309 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:22.261339 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:22.337282 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:22.337301 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:24.868746 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:24.878846 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:24.878904 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:24.903813 1921402 cri.go:96] found id: ""
	I1222 00:35:24.903828 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.903835 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:24.903841 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:24.903901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:24.930142 1921402 cri.go:96] found id: ""
	I1222 00:35:24.930157 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.930164 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:24.930169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:24.930224 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:24.954989 1921402 cri.go:96] found id: ""
	I1222 00:35:24.955009 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.955016 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:24.955022 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:24.955082 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:24.984399 1921402 cri.go:96] found id: ""
	I1222 00:35:24.984413 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.984421 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:24.984426 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:24.984489 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:25.016621 1921402 cri.go:96] found id: ""
	I1222 00:35:25.016637 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.016644 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:25.016650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:25.016714 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:25.044807 1921402 cri.go:96] found id: ""
	I1222 00:35:25.044821 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.044830 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:25.044835 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:25.044926 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:25.071499 1921402 cri.go:96] found id: ""
	I1222 00:35:25.071514 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.071521 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:25.071530 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:25.071541 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:25.149156 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:25.149180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:25.168790 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:25.168813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:25.235601 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:25.235611 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:25.235622 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:25.306834 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:25.306854 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:27.843405 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:27.854516 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:27.854584 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:27.881946 1921402 cri.go:96] found id: ""
	I1222 00:35:27.881960 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.881967 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:27.881972 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:27.882034 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:27.911889 1921402 cri.go:96] found id: ""
	I1222 00:35:27.911903 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.911922 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:27.911927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:27.911984 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:27.938642 1921402 cri.go:96] found id: ""
	I1222 00:35:27.938656 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.938687 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:27.938693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:27.938770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:27.965296 1921402 cri.go:96] found id: ""
	I1222 00:35:27.965317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.965336 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:27.965342 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:27.965410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:27.990233 1921402 cri.go:96] found id: ""
	I1222 00:35:27.990247 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.990254 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:27.990260 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:27.990316 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:28.017956 1921402 cri.go:96] found id: ""
	I1222 00:35:28.017970 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.017986 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:28.017993 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:28.018051 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:28.046152 1921402 cri.go:96] found id: ""
	I1222 00:35:28.046174 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.046182 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:28.046189 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:28.046200 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:28.114636 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:28.114656 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:28.131317 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:28.131333 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:28.201574 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:28.201584 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:28.201594 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:28.273807 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:28.273827 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:30.802833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:30.812760 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:30.812828 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:30.837592 1921402 cri.go:96] found id: ""
	I1222 00:35:30.837606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.837613 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:30.837618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:30.837678 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:30.866275 1921402 cri.go:96] found id: ""
	I1222 00:35:30.866289 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.866296 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:30.866301 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:30.866360 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:30.891264 1921402 cri.go:96] found id: ""
	I1222 00:35:30.891279 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.891286 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:30.891292 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:30.891350 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:30.915256 1921402 cri.go:96] found id: ""
	I1222 00:35:30.915270 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.915277 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:30.915288 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:30.915344 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:30.939875 1921402 cri.go:96] found id: ""
	I1222 00:35:30.939888 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.939895 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:30.939901 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:30.939959 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:30.964213 1921402 cri.go:96] found id: ""
	I1222 00:35:30.964227 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.964235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:30.964240 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:30.964298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:30.989063 1921402 cri.go:96] found id: ""
	I1222 00:35:30.989076 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.989084 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:30.989091 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:30.989104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:31.055959 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:31.055969 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:31.055980 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:31.130704 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:31.130724 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:31.175086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:31.175102 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:31.241755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:31.241774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:33.758213 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:33.768479 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:33.768543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:33.800290 1921402 cri.go:96] found id: ""
	I1222 00:35:33.800304 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.800311 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:33.800317 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:33.800374 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:33.828235 1921402 cri.go:96] found id: ""
	I1222 00:35:33.828249 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.828256 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:33.828261 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:33.828319 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:33.853431 1921402 cri.go:96] found id: ""
	I1222 00:35:33.853446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.853454 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:33.853459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:33.853517 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:33.878506 1921402 cri.go:96] found id: ""
	I1222 00:35:33.878520 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.878526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:33.878531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:33.878589 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:33.903278 1921402 cri.go:96] found id: ""
	I1222 00:35:33.903292 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.903299 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:33.903305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:33.903361 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:33.931624 1921402 cri.go:96] found id: ""
	I1222 00:35:33.931638 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.931645 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:33.931650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:33.931709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:33.960994 1921402 cri.go:96] found id: ""
	I1222 00:35:33.961013 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.961029 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:33.961042 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:33.961053 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:34.028536 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:34.028559 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:34.043699 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:34.043717 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:34.113813 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:34.113822 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:34.113832 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:34.196400 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:34.196421 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
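Each poll cycle above runs the same per-component crictl query and logs a warning when no container ID comes back. A minimal standalone sketch of that check follows; only the crictl flags and component names are taken from the log lines, while the program structure is a hypothetical illustration, not minikube's actual cri.go:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components queried in the log, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Mirrors: sudo crictl --timeout=10s ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		// --quiet prints one container ID per line when anything matches.
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: found %d container(s)\n", name, len(ids))
		}
	}
}
```

In this run every query returns an empty ID list, which is why each cycle ends with seven "No container was found matching" warnings before the log-gathering pass.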
	I1222 00:35:36.727299 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:36.737834 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:36.737892 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:36.762640 1921402 cri.go:96] found id: ""
	I1222 00:35:36.762654 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.762661 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:36.762666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:36.762722 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:36.789016 1921402 cri.go:96] found id: ""
	I1222 00:35:36.789030 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.789036 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:36.789041 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:36.789097 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:36.814574 1921402 cri.go:96] found id: ""
	I1222 00:35:36.814588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.814595 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:36.814600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:36.814661 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:36.839346 1921402 cri.go:96] found id: ""
	I1222 00:35:36.839361 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.839368 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:36.839374 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:36.839430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:36.868832 1921402 cri.go:96] found id: ""
	I1222 00:35:36.868846 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.868853 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:36.868858 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:36.868930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:36.894581 1921402 cri.go:96] found id: ""
	I1222 00:35:36.894595 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.894602 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:36.894607 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:36.894664 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:36.924019 1921402 cri.go:96] found id: ""
	I1222 00:35:36.924033 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.924040 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:36.924047 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:36.924059 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:36.990902 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:36.990921 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:37.008849 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:37.008867 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:37.083336 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:37.083345 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:37.083356 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:37.173618 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:37.173638 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
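Every describe-nodes attempt fails identically: kubectl cannot reach https://localhost:8441 (the apiserver port this profile uses) and gets "connection refused", meaning nothing is listening on the port at all. A quick probe that reproduces the same failure mode; the address comes from the errors above, everything else is an assumed sketch:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If no process is bound to the port, the dial fails immediately with
	// "connect: connection refused" - the same error kubectl reports above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```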
	I1222 00:35:39.706094 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:39.716405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:39.716478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:39.746846 1921402 cri.go:96] found id: ""
	I1222 00:35:39.746861 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.746868 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:39.746873 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:39.746933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:39.775788 1921402 cri.go:96] found id: ""
	I1222 00:35:39.775802 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.775810 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:39.775815 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:39.775872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:39.800503 1921402 cri.go:96] found id: ""
	I1222 00:35:39.800523 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.800531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:39.800536 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:39.800595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:39.829015 1921402 cri.go:96] found id: ""
	I1222 00:35:39.829028 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.829035 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:39.829040 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:39.829096 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:39.856667 1921402 cri.go:96] found id: ""
	I1222 00:35:39.856680 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.856688 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:39.856693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:39.856752 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:39.881236 1921402 cri.go:96] found id: ""
	I1222 00:35:39.881259 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.881266 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:39.881272 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:39.881365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:39.907774 1921402 cri.go:96] found id: ""
	I1222 00:35:39.907788 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.907795 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:39.907803 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:39.907814 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:39.973961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:39.973979 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:39.988619 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:39.988633 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:40.060659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:40.060671 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:40.060682 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:40.133570 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:40.133590 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
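The timestamps show the retry cadence: the pgrep probe for a kube-apiserver process repeats roughly every three seconds, and each miss triggers another full log-gathering pass. A sketch of that wait loop under stated assumptions; the pgrep pattern is copied from the log, while the loop shape and the two-minute deadline are hypothetical, not minikube's logs.go:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the pattern,
		// mirroring: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // the ~3 s gap visible between cycles
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
```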
	I1222 00:35:42.668577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:42.678942 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:42.678998 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:42.708217 1921402 cri.go:96] found id: ""
	I1222 00:35:42.708231 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.708237 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:42.708243 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:42.708298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:42.733470 1921402 cri.go:96] found id: ""
	I1222 00:35:42.733484 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.733491 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:42.733497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:42.733555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:42.758025 1921402 cri.go:96] found id: ""
	I1222 00:35:42.758039 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.758050 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:42.758056 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:42.758114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:42.783070 1921402 cri.go:96] found id: ""
	I1222 00:35:42.783084 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.783092 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:42.783097 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:42.783156 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:42.813432 1921402 cri.go:96] found id: ""
	I1222 00:35:42.813447 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.813454 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:42.813459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:42.813522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:42.841107 1921402 cri.go:96] found id: ""
	I1222 00:35:42.841121 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.841137 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:42.841143 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:42.841208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:42.866521 1921402 cri.go:96] found id: ""
	I1222 00:35:42.866535 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.866543 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:42.866551 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:42.866562 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:42.933074 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:42.933092 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:42.950000 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:42.950025 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:43.021528 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:43.021539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:43.021551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:43.096285 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:43.096308 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:45.625156 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:45.635680 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:45.635759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:45.665749 1921402 cri.go:96] found id: ""
	I1222 00:35:45.665763 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.665770 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:45.665775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:45.665833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:45.692343 1921402 cri.go:96] found id: ""
	I1222 00:35:45.692357 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.692364 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:45.692370 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:45.692432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:45.721093 1921402 cri.go:96] found id: ""
	I1222 00:35:45.721108 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.721115 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:45.721120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:45.721180 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:45.747092 1921402 cri.go:96] found id: ""
	I1222 00:35:45.747107 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.747114 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:45.747120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:45.747177 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:45.776676 1921402 cri.go:96] found id: ""
	I1222 00:35:45.776691 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.776698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:45.776704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:45.776760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:45.802786 1921402 cri.go:96] found id: ""
	I1222 00:35:45.802800 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.802808 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:45.802814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:45.802873 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:45.828248 1921402 cri.go:96] found id: ""
	I1222 00:35:45.828269 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.828279 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:45.828287 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:45.828299 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:45.844960 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:45.844976 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:45.915742 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:45.915756 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:45.915766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:45.994270 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:45.994295 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:46.025259 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:46.025275 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:48.596430 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:48.606663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:48.606725 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:48.635341 1921402 cri.go:96] found id: ""
	I1222 00:35:48.635356 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.635363 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:48.635369 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:48.635430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:48.661161 1921402 cri.go:96] found id: ""
	I1222 00:35:48.661184 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.661191 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:48.661196 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:48.661259 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:48.690450 1921402 cri.go:96] found id: ""
	I1222 00:35:48.690464 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.690470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:48.690476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:48.690537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:48.716347 1921402 cri.go:96] found id: ""
	I1222 00:35:48.716363 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.716388 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:48.716409 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:48.716477 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:48.746573 1921402 cri.go:96] found id: ""
	I1222 00:35:48.746588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.746595 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:48.746600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:48.746659 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:48.772755 1921402 cri.go:96] found id: ""
	I1222 00:35:48.772770 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.772777 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:48.772783 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:48.772843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:48.799586 1921402 cri.go:96] found id: ""
	I1222 00:35:48.799600 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.799607 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:48.799615 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:48.799629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:48.860067 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:48.852213   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.852883   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.853939   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.854510   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.856073   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:48.860076 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:48.860087 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:48.930351 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:48.930371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:48.960079 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:48.960099 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:49.028235 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:49.028254 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.545623 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:51.556102 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:51.556160 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:51.584081 1921402 cri.go:96] found id: ""
	I1222 00:35:51.584095 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.584102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:51.584108 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:51.584166 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:51.608602 1921402 cri.go:96] found id: ""
	I1222 00:35:51.608616 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.608623 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:51.608628 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:51.608684 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:51.638215 1921402 cri.go:96] found id: ""
	I1222 00:35:51.638229 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.638236 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:51.638242 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:51.638300 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:51.673979 1921402 cri.go:96] found id: ""
	I1222 00:35:51.673993 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.674000 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:51.674005 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:51.674062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:51.703793 1921402 cri.go:96] found id: ""
	I1222 00:35:51.703807 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.703814 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:51.703820 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:51.703879 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:51.731106 1921402 cri.go:96] found id: ""
	I1222 00:35:51.731120 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.731128 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:51.731133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:51.731191 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:51.758738 1921402 cri.go:96] found id: ""
	I1222 00:35:51.758752 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.758759 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:51.758766 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:51.758776 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:51.788397 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:51.788413 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:51.855218 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:51.855236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.870464 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:51.870480 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:51.935854 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:51.927185   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.927877   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.929600   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.930120   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.931805   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:51.935864 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:51.935874 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:54.506346 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:54.516494 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:54.516555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:54.542861 1921402 cri.go:96] found id: ""
	I1222 00:35:54.542877 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.542884 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:54.542889 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:54.542947 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:54.567393 1921402 cri.go:96] found id: ""
	I1222 00:35:54.567409 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.567417 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:54.567422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:54.567480 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:54.596220 1921402 cri.go:96] found id: ""
	I1222 00:35:54.596234 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.596241 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:54.596246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:54.596307 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:54.621195 1921402 cri.go:96] found id: ""
	I1222 00:35:54.621210 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.621217 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:54.621223 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:54.621286 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:54.647655 1921402 cri.go:96] found id: ""
	I1222 00:35:54.647669 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.647676 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:54.647681 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:54.647746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:54.673352 1921402 cri.go:96] found id: ""
	I1222 00:35:54.673365 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.673372 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:54.673378 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:54.673434 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:54.702345 1921402 cri.go:96] found id: ""
	I1222 00:35:54.702359 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.702366 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:54.702373 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:54.702385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:54.729725 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:54.729740 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:54.795556 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:54.795573 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:54.810399 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:54.810415 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:54.873205 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:54.864296   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.865005   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.866876   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.867451   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.869170   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:54.873215 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:54.873236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.442564 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:57.452367 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:57.452430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:57.479752 1921402 cri.go:96] found id: ""
	I1222 00:35:57.479765 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.479773 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:57.479780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:57.479843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:57.505262 1921402 cri.go:96] found id: ""
	I1222 00:35:57.505277 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.505286 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:57.505291 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:57.505367 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:57.530750 1921402 cri.go:96] found id: ""
	I1222 00:35:57.530764 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.530771 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:57.530777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:57.530833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:57.557273 1921402 cri.go:96] found id: ""
	I1222 00:35:57.557286 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.557293 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:57.557298 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:57.557375 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:57.589790 1921402 cri.go:96] found id: ""
	I1222 00:35:57.589803 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.589810 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:57.589816 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:57.589872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:57.615560 1921402 cri.go:96] found id: ""
	I1222 00:35:57.615573 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.615580 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:57.615585 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:57.615647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:57.643950 1921402 cri.go:96] found id: ""
	I1222 00:35:57.643964 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.643971 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:57.643979 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:57.643989 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:57.710337 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:57.710357 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:57.725273 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:57.725289 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:57.792176 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:57.782446   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.783810   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.784701   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.785725   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.786474   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:57.792186 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:57.792202 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.869558 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:57.869578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
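
[editor's note] The cycle above is minikube's log gatherer polling the CRI for each expected control-plane container and finding none. A minimal shell sketch of the same probe, built only from the commands already visible in the log (the loop and echo are illustrative glue, not minikube's code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # Same query the ssh_runner lines above execute on the node
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      # Empty output corresponds to the 'found id: ""' / '0 containers' lines
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
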
	I1222 00:36:00.405062 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:00.432605 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:00.432680 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:00.470863 1921402 cri.go:96] found id: ""
	I1222 00:36:00.470879 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.470886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:00.470892 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:00.470958 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:00.501706 1921402 cri.go:96] found id: ""
	I1222 00:36:00.501723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.501736 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:00.501743 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:00.501825 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:00.531718 1921402 cri.go:96] found id: ""
	I1222 00:36:00.531733 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.531741 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:00.531746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:00.531814 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:00.560495 1921402 cri.go:96] found id: ""
	I1222 00:36:00.560511 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.560518 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:00.560525 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:00.560585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:00.588202 1921402 cri.go:96] found id: ""
	I1222 00:36:00.588217 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.588225 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:00.588230 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:00.588293 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:00.616313 1921402 cri.go:96] found id: ""
	I1222 00:36:00.616328 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.616335 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:00.616341 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:00.616402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:00.647784 1921402 cri.go:96] found id: ""
	I1222 00:36:00.647799 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.647806 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:00.647814 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:00.647823 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:00.715096 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:00.715115 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:00.730829 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:00.730845 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:00.797617 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:00.797630 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:00.797642 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:00.866765 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:00.866784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.396597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:03.407146 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:03.407208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:03.433039 1921402 cri.go:96] found id: ""
	I1222 00:36:03.433053 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.433061 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:03.433066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:03.433141 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:03.458706 1921402 cri.go:96] found id: ""
	I1222 00:36:03.458720 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.458727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:03.458732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:03.458789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:03.484510 1921402 cri.go:96] found id: ""
	I1222 00:36:03.484524 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.484531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:03.484537 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:03.484607 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:03.510384 1921402 cri.go:96] found id: ""
	I1222 00:36:03.510399 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.510407 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:03.510413 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:03.510475 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:03.543748 1921402 cri.go:96] found id: ""
	I1222 00:36:03.543763 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.543770 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:03.543775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:03.543833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:03.571136 1921402 cri.go:96] found id: ""
	I1222 00:36:03.571150 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.571157 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:03.571162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:03.571219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:03.600085 1921402 cri.go:96] found id: ""
	I1222 00:36:03.600100 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.600107 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:03.600113 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:03.600124 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:03.664241 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:03.664252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:03.664264 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:03.738233 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:03.738260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.773144 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:03.773160 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:03.841880 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:03.841901 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
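
[editor's note] The "container status" gatherer seen in each cycle prefers crictl and only falls back to docker if crictl is absent; this is the exact command from the log, shown standalone for clarity:

    # Falls back to 'docker ps -a' when crictl is not on PATH or its listing fails
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
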
	I1222 00:36:06.357571 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:06.368813 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:06.368874 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:06.400278 1921402 cri.go:96] found id: ""
	I1222 00:36:06.400293 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.400301 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:06.400312 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:06.400373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:06.430787 1921402 cri.go:96] found id: ""
	I1222 00:36:06.430801 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.430809 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:06.430814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:06.430872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:06.455395 1921402 cri.go:96] found id: ""
	I1222 00:36:06.455409 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.455416 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:06.455421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:06.455478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:06.485881 1921402 cri.go:96] found id: ""
	I1222 00:36:06.485895 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.485902 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:06.485908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:06.485981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:06.512824 1921402 cri.go:96] found id: ""
	I1222 00:36:06.512838 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.512845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:06.512850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:06.512918 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:06.540462 1921402 cri.go:96] found id: ""
	I1222 00:36:06.540477 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.540494 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:06.540500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:06.540572 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:06.566709 1921402 cri.go:96] found id: ""
	I1222 00:36:06.566723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.566731 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:06.566738 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:06.566750 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:06.634309 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:06.634331 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.649342 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:06.649359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:06.719108 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:06.709644   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.711082   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.712027   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.713691   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.714019   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:06.709644   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.711082   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.712027   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.713691   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.714019   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:06.719119 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:06.719129 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:06.791062 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:06.791080 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.321217 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:09.331643 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:09.331702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:09.370112 1921402 cri.go:96] found id: ""
	I1222 00:36:09.370139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.370146 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:09.370152 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:09.370216 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:09.400020 1921402 cri.go:96] found id: ""
	I1222 00:36:09.400041 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.400049 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:09.400055 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:09.400126 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:09.436087 1921402 cri.go:96] found id: ""
	I1222 00:36:09.436114 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.436121 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:09.436126 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:09.436194 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:09.463697 1921402 cri.go:96] found id: ""
	I1222 00:36:09.463711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.463718 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:09.463724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:09.463780 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:09.491471 1921402 cri.go:96] found id: ""
	I1222 00:36:09.491485 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.491492 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:09.491498 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:09.491555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:09.516866 1921402 cri.go:96] found id: ""
	I1222 00:36:09.516880 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.516903 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:09.516908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:09.516967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:09.545747 1921402 cri.go:96] found id: ""
	I1222 00:36:09.545761 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.545768 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:09.545777 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:09.545788 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:09.607682 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:09.599126   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.599834   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.601473   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.602038   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.603633   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:09.599126   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.599834   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.601473   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.602038   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.603633   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:09.607692 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:09.607702 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:09.682101 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:09.682120 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.713866 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:09.713881 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:09.781658 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:09.781677 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.297278 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:12.307636 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:12.307707 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:12.337251 1921402 cri.go:96] found id: ""
	I1222 00:36:12.337266 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.337273 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:12.337278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:12.337373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:12.371706 1921402 cri.go:96] found id: ""
	I1222 00:36:12.371719 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.371727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:12.371732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:12.371789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:12.411047 1921402 cri.go:96] found id: ""
	I1222 00:36:12.411060 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.411067 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:12.411072 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:12.411132 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:12.435536 1921402 cri.go:96] found id: ""
	I1222 00:36:12.435550 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.435557 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:12.435562 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:12.435623 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:12.461780 1921402 cri.go:96] found id: ""
	I1222 00:36:12.461794 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.461802 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:12.461808 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:12.461872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:12.487870 1921402 cri.go:96] found id: ""
	I1222 00:36:12.487884 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.487891 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:12.487897 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:12.487954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:12.513933 1921402 cri.go:96] found id: ""
	I1222 00:36:12.513947 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.513954 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:12.513961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:12.513972 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.528919 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:12.528934 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:12.596732 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:12.587250   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.587883   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.589568   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.590112   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.592244   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:12.587250   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.587883   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.589568   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.590112   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.592244   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:12.596742 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:12.596753 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:12.669434 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:12.669452 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:12.702088 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:12.702104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:15.269222 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:15.279437 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:15.279497 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:15.308005 1921402 cri.go:96] found id: ""
	I1222 00:36:15.308019 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.308026 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:15.308034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:15.308104 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:15.335800 1921402 cri.go:96] found id: ""
	I1222 00:36:15.335814 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.335822 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:15.335829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:15.335889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:15.364041 1921402 cri.go:96] found id: ""
	I1222 00:36:15.364055 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.364064 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:15.364070 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:15.364127 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:15.396634 1921402 cri.go:96] found id: ""
	I1222 00:36:15.396649 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.396657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:15.396663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:15.396721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:15.427551 1921402 cri.go:96] found id: ""
	I1222 00:36:15.427565 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.427572 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:15.427578 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:15.427640 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:15.462972 1921402 cri.go:96] found id: ""
	I1222 00:36:15.462987 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.462994 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:15.463000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:15.463057 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:15.488170 1921402 cri.go:96] found id: ""
	I1222 00:36:15.488184 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.488201 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:15.488208 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:15.488219 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:15.503474 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:15.503499 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:15.576848 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:15.568696   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.569154   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.570855   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.571328   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.572897   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:15.568696   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.569154   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.570855   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.571328   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.572897   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:15.576859 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:15.576870 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:15.647239 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:15.647258 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:15.677589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:15.677604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.247711 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:18.259251 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:18.259314 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:18.285041 1921402 cri.go:96] found id: ""
	I1222 00:36:18.285056 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.285063 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:18.285068 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:18.285129 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:18.312429 1921402 cri.go:96] found id: ""
	I1222 00:36:18.312442 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.312449 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:18.312455 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:18.312509 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:18.342758 1921402 cri.go:96] found id: ""
	I1222 00:36:18.342772 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.342779 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:18.342786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:18.342844 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:18.376484 1921402 cri.go:96] found id: ""
	I1222 00:36:18.376497 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.376505 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:18.376510 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:18.376567 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:18.411956 1921402 cri.go:96] found id: ""
	I1222 00:36:18.411970 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.411987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:18.411992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:18.412059 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:18.441521 1921402 cri.go:96] found id: ""
	I1222 00:36:18.441535 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.441542 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:18.441549 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:18.441606 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:18.466320 1921402 cri.go:96] found id: ""
	I1222 00:36:18.466335 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.466342 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:18.466349 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:18.466359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.540350 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:18.540370 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:18.555809 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:18.555826 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:18.620233 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:18.612078   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.612476   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.613960   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.615241   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.616445   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:18.612078   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.612476   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.613960   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.615241   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.616445   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:18.620247 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:18.620260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:18.690556 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:18.690578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.222964 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:21.233019 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:21.233077 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:21.258106 1921402 cri.go:96] found id: ""
	I1222 00:36:21.258120 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.258127 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:21.258133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:21.258190 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:21.282679 1921402 cri.go:96] found id: ""
	I1222 00:36:21.282693 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.282699 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:21.282704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:21.282761 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:21.311750 1921402 cri.go:96] found id: ""
	I1222 00:36:21.311764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.311789 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:21.311794 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:21.311861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:21.340844 1921402 cri.go:96] found id: ""
	I1222 00:36:21.340858 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.340866 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:21.340871 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:21.340940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:21.375873 1921402 cri.go:96] found id: ""
	I1222 00:36:21.375898 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.375905 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:21.375910 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:21.375973 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:21.406752 1921402 cri.go:96] found id: ""
	I1222 00:36:21.406767 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.406775 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:21.406780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:21.406845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:21.430878 1921402 cri.go:96] found id: ""
	I1222 00:36:21.430892 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.430899 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:21.430906 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:21.430925 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:21.499738 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:21.499757 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.532589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:21.532604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:21.600165 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:21.600183 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:21.615391 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:21.615407 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:21.684109 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:21.675788   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.676744   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678413   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678724   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.680260   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:21.675788   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.676744   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678413   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678724   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.680260   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
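
[editor's note] Every "describe nodes" attempt in this section fails identically: kubectl cannot reach the apiserver on localhost:8441, which is consistent with the empty kube-apiserver container listings above. A quick check one could run on the node to confirm nothing is listening on that port (standard iproute2 ss flags; the port number comes from the errors above):

    # -l listening sockets, -t TCP, -n numeric ports
    sudo ss -ltn | grep ':8441' || echo "no listener on :8441 (matches 'connection refused')"
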
	I1222 00:36:24.185197 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:24.196664 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:24.196744 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:24.223518 1921402 cri.go:96] found id: ""
	I1222 00:36:24.223533 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.223540 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:24.223546 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:24.223609 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:24.250130 1921402 cri.go:96] found id: ""
	I1222 00:36:24.250145 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.250152 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:24.250159 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:24.250218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:24.275065 1921402 cri.go:96] found id: ""
	I1222 00:36:24.275079 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.275086 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:24.275091 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:24.275147 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:24.299603 1921402 cri.go:96] found id: ""
	I1222 00:36:24.299617 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.299625 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:24.299630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:24.299688 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:24.325078 1921402 cri.go:96] found id: ""
	I1222 00:36:24.325092 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.325099 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:24.325116 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:24.325173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:24.357840 1921402 cri.go:96] found id: ""
	I1222 00:36:24.357854 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.357871 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:24.357877 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:24.357933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:24.384301 1921402 cri.go:96] found id: ""
	I1222 00:36:24.384315 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.384345 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:24.384356 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:24.384371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:24.455452 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:24.455475 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:24.470577 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:24.470593 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:24.536053 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:24.526996   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.527804   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.529667   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.530226   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.531796   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:24.526996   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.527804   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.529667   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.530226   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.531796   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:24.536310 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:24.536325 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:24.615547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:24.615572 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
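Every "describe nodes" attempt above fails the same way: the TCP connect to [::1]:8441 is refused before any HTTP request is made, meaning nothing is listening on the apiserver port at all (8441 rather than kubectl's default 8443 is consistent with the functional-test profile). A minimal Go check reproducing that symptom from the node (a hypothetical helper, not part of the test suite):

    // Minimal sketch (assumes it runs on the minikube node) of the failure
    // mode kubectl reports above: the dial is refused, so the error occurs
    // before any TLS or API exchange could happen.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8441")
    }

A refused dial like this points at the apiserver container never starting, which matches the empty crictl listings above, rather than at a TLS or kubeconfig problem.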
	I1222 00:36:27.151479 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:27.161793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:27.161853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:27.187096 1921402 cri.go:96] found id: ""
	I1222 00:36:27.187110 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.187117 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:27.187123 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:27.187179 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:27.212527 1921402 cri.go:96] found id: ""
	I1222 00:36:27.212541 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.212548 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:27.212553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:27.212613 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:27.241441 1921402 cri.go:96] found id: ""
	I1222 00:36:27.241455 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.241462 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:27.241467 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:27.241522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:27.266764 1921402 cri.go:96] found id: ""
	I1222 00:36:27.266778 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.266786 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:27.266791 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:27.266848 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:27.292091 1921402 cri.go:96] found id: ""
	I1222 00:36:27.292105 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.292113 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:27.292118 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:27.292175 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:27.316799 1921402 cri.go:96] found id: ""
	I1222 00:36:27.316813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.316821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:27.316826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:27.316896 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:27.341725 1921402 cri.go:96] found id: ""
	I1222 00:36:27.341739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.341746 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:27.341754 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:27.341764 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:27.363209 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:27.363227 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:27.435344 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:27.427029   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.427893   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.429703   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.430074   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.431490   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:27.427029   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.427893   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.429703   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.430074   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.431490   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:27.435355 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:27.435367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:27.505547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:27.505565 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.534380 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:27.534395 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
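The "Gathering logs for ..." steps in each pass shell out through /bin/bash -c so that the pipes and backtick substitution inside the commands are interpreted. A sketch of that pattern, with the exact commands taken from the log (an assumed helper, not minikube's code):

    // Hypothetical sketch of the log-gathering steps: run each diagnostic
    // command through bash so shell syntax in the command string works.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(label, cmd string) {
    	fmt.Println("Gathering logs for", label, "...")
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("failed %s: %v\n", label, err)
    	}
    	fmt.Printf("%s", out)
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    	gather("CRI-O", `sudo journalctl -u crio -n 400`)
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }

The remaining passes in the log below repeat these same probes and gathers, with fresh kubectl PIDs (15343, 15437, 15567, ...) and identical connection-refused results each time.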
	I1222 00:36:30.100897 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:30.112047 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:30.112110 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:30.143007 1921402 cri.go:96] found id: ""
	I1222 00:36:30.143021 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.143028 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:30.143034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:30.143091 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:30.168174 1921402 cri.go:96] found id: ""
	I1222 00:36:30.168188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.168196 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:30.168202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:30.168270 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:30.198167 1921402 cri.go:96] found id: ""
	I1222 00:36:30.198181 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.198189 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:30.198194 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:30.198254 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:30.228238 1921402 cri.go:96] found id: ""
	I1222 00:36:30.228252 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.228259 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:30.228265 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:30.228326 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:30.252893 1921402 cri.go:96] found id: ""
	I1222 00:36:30.252907 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.252914 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:30.252920 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:30.252981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:30.282045 1921402 cri.go:96] found id: ""
	I1222 00:36:30.282067 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.282074 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:30.282080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:30.282146 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:30.308008 1921402 cri.go:96] found id: ""
	I1222 00:36:30.308020 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.308027 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:30.308036 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:30.308047 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:30.380173 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:30.380183 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:30.380193 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:30.455922 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:30.455942 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:30.489214 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:30.489236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.560024 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:30.560042 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.076015 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:33.086212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:33.086271 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:33.111816 1921402 cri.go:96] found id: ""
	I1222 00:36:33.111830 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.111837 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:33.111842 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:33.111899 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:33.139689 1921402 cri.go:96] found id: ""
	I1222 00:36:33.139702 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.139709 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:33.139715 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:33.139771 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:33.164725 1921402 cri.go:96] found id: ""
	I1222 00:36:33.164739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.164746 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:33.164752 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:33.164809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:33.191640 1921402 cri.go:96] found id: ""
	I1222 00:36:33.191654 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.191661 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:33.191666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:33.191721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:33.217604 1921402 cri.go:96] found id: ""
	I1222 00:36:33.217618 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.217625 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:33.217631 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:33.217690 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:33.243174 1921402 cri.go:96] found id: ""
	I1222 00:36:33.243188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.243195 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:33.243201 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:33.243256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:33.271929 1921402 cri.go:96] found id: ""
	I1222 00:36:33.271943 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.271950 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:33.271958 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:33.271968 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:33.300147 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:33.300163 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:33.366779 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:33.366797 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.382105 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:33.382122 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:33.445242 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:33.445252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:33.445261 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:36.014121 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:36.025137 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:36.025200 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:36.051061 1921402 cri.go:96] found id: ""
	I1222 00:36:36.051077 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.051085 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:36.051090 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:36.051154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:36.082445 1921402 cri.go:96] found id: ""
	I1222 00:36:36.082458 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.082466 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:36.082471 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:36.082543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:36.108654 1921402 cri.go:96] found id: ""
	I1222 00:36:36.108668 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.108676 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:36.108682 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:36.108746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:36.133581 1921402 cri.go:96] found id: ""
	I1222 00:36:36.133595 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.133602 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:36.133608 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:36.133672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:36.158297 1921402 cri.go:96] found id: ""
	I1222 00:36:36.158311 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.158318 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:36.158324 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:36.158397 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:36.188437 1921402 cri.go:96] found id: ""
	I1222 00:36:36.188451 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.188458 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:36.188464 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:36.188521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:36.213711 1921402 cri.go:96] found id: ""
	I1222 00:36:36.213725 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.213732 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:36.213740 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:36.213752 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:36.241158 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:36.241174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:36.309133 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:36.309152 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:36.323930 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:36.323945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:36.412954 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:36.412965 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:36.412977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:38.983065 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:38.993162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:38.993219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:39.019770 1921402 cri.go:96] found id: ""
	I1222 00:36:39.019784 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.019791 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:39.019797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:39.019854 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:39.044991 1921402 cri.go:96] found id: ""
	I1222 00:36:39.045005 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.045012 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:39.045017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:39.045074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:39.070008 1921402 cri.go:96] found id: ""
	I1222 00:36:39.070022 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.070029 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:39.070034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:39.070107 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:39.094755 1921402 cri.go:96] found id: ""
	I1222 00:36:39.094768 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.094776 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:39.094781 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:39.094837 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:39.120054 1921402 cri.go:96] found id: ""
	I1222 00:36:39.120068 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.120075 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:39.120080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:39.120136 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:39.144698 1921402 cri.go:96] found id: ""
	I1222 00:36:39.144711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.144719 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:39.144724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:39.144777 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:39.169125 1921402 cri.go:96] found id: ""
	I1222 00:36:39.169139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.169146 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:39.169153 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:39.169162 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:39.235404 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:39.235423 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:39.250312 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:39.250330 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:39.310727 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:39.310737 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:39.310749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:39.385727 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:39.385745 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:41.917457 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:41.927340 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:41.927400 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:41.952148 1921402 cri.go:96] found id: ""
	I1222 00:36:41.952162 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.952169 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:41.952175 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:41.952231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:41.977527 1921402 cri.go:96] found id: ""
	I1222 00:36:41.977542 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.977549 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:41.977555 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:41.977625 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:42.005444 1921402 cri.go:96] found id: ""
	I1222 00:36:42.005462 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.005470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:42.005476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:42.005551 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:42.041797 1921402 cri.go:96] found id: ""
	I1222 00:36:42.041813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.041820 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:42.041826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:42.041888 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:42.069781 1921402 cri.go:96] found id: ""
	I1222 00:36:42.069797 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.069806 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:42.069812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:42.069880 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:42.104984 1921402 cri.go:96] found id: ""
	I1222 00:36:42.105000 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.105009 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:42.105016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:42.105125 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:42.150604 1921402 cri.go:96] found id: ""
	I1222 00:36:42.150620 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.150629 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:42.150637 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:42.150649 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:42.223375 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:42.223400 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:42.242775 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:42.242802 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:42.318351 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:42.318361 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:42.318371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:42.393293 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:42.393311 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:44.928982 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:44.938908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:44.938968 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:44.963638 1921402 cri.go:96] found id: ""
	I1222 00:36:44.963652 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.963659 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:44.963665 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:44.963723 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:44.988816 1921402 cri.go:96] found id: ""
	I1222 00:36:44.988831 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.988838 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:44.988843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:44.988908 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:45.056183 1921402 cri.go:96] found id: ""
	I1222 00:36:45.056202 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.056212 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:45.056218 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:45.056311 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:45.094388 1921402 cri.go:96] found id: ""
	I1222 00:36:45.094407 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.094416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:45.094422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:45.094503 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:45.141522 1921402 cri.go:96] found id: ""
	I1222 00:36:45.141539 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.141547 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:45.141554 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:45.141632 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:45.184485 1921402 cri.go:96] found id: ""
	I1222 00:36:45.184517 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.184539 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:45.184545 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:45.184691 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:45.244764 1921402 cri.go:96] found id: ""
	I1222 00:36:45.244780 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.244789 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:45.244799 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:45.244810 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:45.323564 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:45.323584 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:45.341732 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:45.341749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:45.426921 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:36:45.426931 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:45.426941 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:45.496872 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:45.496897 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:48.027828 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:48.038486 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:48.038549 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:48.063156 1921402 cri.go:96] found id: ""
	I1222 00:36:48.063170 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.063177 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:48.063183 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:48.063242 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:48.092751 1921402 cri.go:96] found id: ""
	I1222 00:36:48.092764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.092771 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:48.092776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:48.092833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:48.117758 1921402 cri.go:96] found id: ""
	I1222 00:36:48.117771 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.117778 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:48.117784 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:48.117845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:48.156671 1921402 cri.go:96] found id: ""
	I1222 00:36:48.156685 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.156692 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:48.156697 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:48.156759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:48.181935 1921402 cri.go:96] found id: ""
	I1222 00:36:48.181948 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.181961 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:48.181966 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:48.182025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:48.206454 1921402 cri.go:96] found id: ""
	I1222 00:36:48.206468 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.206476 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:48.206481 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:48.206537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:48.230655 1921402 cri.go:96] found id: ""
	I1222 00:36:48.230669 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.230676 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:48.230684 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:48.230694 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:48.297753 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:48.297772 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:48.312560 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:48.312575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:48.405529 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:48.396998   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.397716   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399109   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399746   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.401379   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
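
Note: the crictl scan above walks the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and returns no container IDs for any of them, running or exited, which suggests the control plane was never created rather than crashed. A sketch of the same scan as a standalone loop (names and flags taken verbatim from the log):

    # List all containers, in any state, matching each control-plane name.
    # Empty output for a name reproduces the 'found id: ""' lines above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl --timeout=10s ps -a --quiet --name="$c"
    done
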
	I1222 00:36:48.405539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:48.405550 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:48.479216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:48.479235 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.006751 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:51.017983 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:51.018043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:51.045081 1921402 cri.go:96] found id: ""
	I1222 00:36:51.045095 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.045102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:51.045133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:51.045231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:51.072909 1921402 cri.go:96] found id: ""
	I1222 00:36:51.072923 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.072931 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:51.072936 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:51.073002 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:51.101558 1921402 cri.go:96] found id: ""
	I1222 00:36:51.101582 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.101590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:51.101596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:51.101667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:51.128728 1921402 cri.go:96] found id: ""
	I1222 00:36:51.128762 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.128771 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:51.128777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:51.128843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:51.155107 1921402 cri.go:96] found id: ""
	I1222 00:36:51.155123 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.155129 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:51.155135 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:51.155196 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:51.183281 1921402 cri.go:96] found id: ""
	I1222 00:36:51.183295 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.183302 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:51.183308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:51.183365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:51.209022 1921402 cri.go:96] found id: ""
	I1222 00:36:51.209050 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.209057 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:51.209078 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:51.209088 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:51.272611 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:51.264496   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.265088   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.266719   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.267234   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.268804   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:51.272624 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:51.272634 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:51.342544 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:51.342563 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.373712 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:51.373727 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:51.452392 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:51.452419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:53.968915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:53.979167 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:53.979236 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:54.009488 1921402 cri.go:96] found id: ""
	I1222 00:36:54.009513 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.009521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:54.009527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:54.009592 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:54.036904 1921402 cri.go:96] found id: ""
	I1222 00:36:54.036919 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.036927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:54.036932 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:54.036991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:54.065561 1921402 cri.go:96] found id: ""
	I1222 00:36:54.065576 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.065583 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:54.065588 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:54.065651 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:54.092446 1921402 cri.go:96] found id: ""
	I1222 00:36:54.092472 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.092479 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:54.092485 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:54.092554 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:54.124282 1921402 cri.go:96] found id: ""
	I1222 00:36:54.124296 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.124303 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:54.124308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:54.124376 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:54.149069 1921402 cri.go:96] found id: ""
	I1222 00:36:54.149082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.149089 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:54.149095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:54.149150 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:54.173952 1921402 cri.go:96] found id: ""
	I1222 00:36:54.173966 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.173973 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:54.173985 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:54.173996 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:54.243687 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:54.243707 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:54.274613 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:54.274629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:54.347248 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:54.347266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:54.368410 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:54.368429 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:54.442889 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:54.435376   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.435813   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437040   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437607   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.439090   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
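
Note: the timestamps show this diagnostic cycle repeating on a roughly three-second cadence (00:36:45, :48, :51, :54, ...), i.e. a poll loop waiting for a kube-apiserver process to appear before the test gives up. A sketch of an equivalent wait loop (the retry count is an assumption; the pgrep invocation is copied from the log):

    # Poll for a minikube kube-apiserver process, as the log does each cycle.
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done
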
	I1222 00:36:56.943148 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:56.953207 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:56.953265 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:56.980068 1921402 cri.go:96] found id: ""
	I1222 00:36:56.980082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:56.980089 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:56.980095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:56.980154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:57.009964 1921402 cri.go:96] found id: ""
	I1222 00:36:57.009979 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.009987 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:57.009992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:57.010056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:57.040772 1921402 cri.go:96] found id: ""
	I1222 00:36:57.040788 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.040796 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:57.040801 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:57.040861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:57.066436 1921402 cri.go:96] found id: ""
	I1222 00:36:57.066449 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.066456 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:57.066461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:57.066518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:57.090966 1921402 cri.go:96] found id: ""
	I1222 00:36:57.090980 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.090987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:57.090992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:57.091052 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:57.120134 1921402 cri.go:96] found id: ""
	I1222 00:36:57.120149 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.120156 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:57.120162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:57.120218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:57.145014 1921402 cri.go:96] found id: ""
	I1222 00:36:57.145027 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.145034 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:57.145041 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:57.145052 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:57.214243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:57.214263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:57.248369 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:57.248387 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:57.316316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:57.316337 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:57.331296 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:57.331313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:57.423941 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:57.415708   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.416209   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.417860   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.418329   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.419836   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:59.924191 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:59.934553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:59.934611 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:59.963952 1921402 cri.go:96] found id: ""
	I1222 00:36:59.963967 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.963974 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:59.963980 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:59.964039 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:59.989241 1921402 cri.go:96] found id: ""
	I1222 00:36:59.989254 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.989261 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:59.989267 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:59.989346 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:00.041618 1921402 cri.go:96] found id: ""
	I1222 00:37:00.041645 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.041653 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:00.041659 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:00.041743 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:00.156035 1921402 cri.go:96] found id: ""
	I1222 00:37:00.156051 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.156058 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:00.156066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:00.156135 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:00.209590 1921402 cri.go:96] found id: ""
	I1222 00:37:00.209625 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.209642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:00.209658 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:00.209760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:00.265952 1921402 cri.go:96] found id: ""
	I1222 00:37:00.265968 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.265976 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:00.265982 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:00.266053 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:00.306640 1921402 cri.go:96] found id: ""
	I1222 00:37:00.306669 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.306678 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:00.306692 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:00.306705 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:00.391041 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:00.391062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:00.408447 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:00.408463 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:00.477856 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:00.469060   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.469638   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.471535   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.472037   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.473728   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:00.477866 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:00.477876 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:00.550139 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:00.550158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:03.081702 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:03.091652 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:03.091709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:03.116220 1921402 cri.go:96] found id: ""
	I1222 00:37:03.116234 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.116241 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:03.116247 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:03.116303 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:03.141299 1921402 cri.go:96] found id: ""
	I1222 00:37:03.141313 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.141340 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:03.141345 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:03.141402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:03.166615 1921402 cri.go:96] found id: ""
	I1222 00:37:03.166630 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.166637 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:03.166642 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:03.166706 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:03.196327 1921402 cri.go:96] found id: ""
	I1222 00:37:03.196341 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.196358 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:03.196366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:03.196432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:03.221691 1921402 cri.go:96] found id: ""
	I1222 00:37:03.221705 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.221712 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:03.221717 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:03.221774 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:03.247455 1921402 cri.go:96] found id: ""
	I1222 00:37:03.247470 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.247477 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:03.247482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:03.247540 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:03.277376 1921402 cri.go:96] found id: ""
	I1222 00:37:03.277398 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.277406 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:03.277414 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:03.277424 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:03.345137 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:03.345158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:03.362155 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:03.362174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:03.438149 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:03.429891   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.430725   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.431629   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.433223   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.434022   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:03.438160 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:03.438172 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:03.508243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:03.508262 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.037621 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:06.048285 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:06.048348 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:06.076896 1921402 cri.go:96] found id: ""
	I1222 00:37:06.076911 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.076918 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:06.076924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:06.076985 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:06.102650 1921402 cri.go:96] found id: ""
	I1222 00:37:06.102665 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.102671 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:06.102677 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:06.102732 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:06.127851 1921402 cri.go:96] found id: ""
	I1222 00:37:06.127865 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.127872 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:06.127878 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:06.127934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:06.154598 1921402 cri.go:96] found id: ""
	I1222 00:37:06.154612 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.154619 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:06.154625 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:06.154683 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:06.181128 1921402 cri.go:96] found id: ""
	I1222 00:37:06.181142 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.181149 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:06.181155 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:06.181215 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:06.207749 1921402 cri.go:96] found id: ""
	I1222 00:37:06.207762 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.207770 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:06.207776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:06.207832 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:06.240273 1921402 cri.go:96] found id: ""
	I1222 00:37:06.240287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.240295 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:06.240302 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:06.240313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:06.310353 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:06.310372 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.337909 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:06.337928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:06.413399 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:06.413419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:06.428074 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:06.428091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:06.491546 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:06.483115   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.484395   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.485866   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.486184   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.487595   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:08.991781 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:09.003482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:09.003561 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:09.030801 1921402 cri.go:96] found id: ""
	I1222 00:37:09.030816 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.030823 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:09.030829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:09.030884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:09.063705 1921402 cri.go:96] found id: ""
	I1222 00:37:09.063728 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.063740 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:09.063746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:09.063819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:09.094354 1921402 cri.go:96] found id: ""
	I1222 00:37:09.094370 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.094378 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:09.094384 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:09.094441 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:09.120692 1921402 cri.go:96] found id: ""
	I1222 00:37:09.120706 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.120714 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:09.120719 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:09.120776 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:09.146623 1921402 cri.go:96] found id: ""
	I1222 00:37:09.146642 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.146649 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:09.146654 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:09.146711 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:09.176276 1921402 cri.go:96] found id: ""
	I1222 00:37:09.176290 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.176297 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:09.176302 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:09.176359 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:09.205092 1921402 cri.go:96] found id: ""
	I1222 00:37:09.205106 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.205113 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:09.205120 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:09.205130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:09.271826 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:09.271851 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:09.287713 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:09.287729 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:09.372451 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:09.363564   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.364517   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366322   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366665   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.368181   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:09.372461 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:09.372474 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:09.456787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:09.456808 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:11.988321 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:11.998626 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:11.998682 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:12.034498 1921402 cri.go:96] found id: ""
	I1222 00:37:12.034513 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.034521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:12.034527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:12.034588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:12.065008 1921402 cri.go:96] found id: ""
	I1222 00:37:12.065023 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.065033 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:12.065038 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:12.065098 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:12.092442 1921402 cri.go:96] found id: ""
	I1222 00:37:12.092456 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.092463 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:12.092469 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:12.092529 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:12.118395 1921402 cri.go:96] found id: ""
	I1222 00:37:12.118408 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.118416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:12.118421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:12.118476 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:12.143621 1921402 cri.go:96] found id: ""
	I1222 00:37:12.143634 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.143642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:12.143647 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:12.143702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:12.172923 1921402 cri.go:96] found id: ""
	I1222 00:37:12.172937 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.172944 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:12.172949 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:12.173004 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:12.199794 1921402 cri.go:96] found id: ""
	I1222 00:37:12.199808 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.199815 1921402 logs.go:284] No container was found matching "kindnet"
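The seven probes above walk a fixed component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and every one returns an empty ID list, i.e. no control-plane container exists in any state. The same sweep by hand, with the flags exactly as logged (run on the node, e.g. inside `minikube ssh`):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        echo "== $c =="
        sudo crictl --timeout=10s ps -a --quiet --name=$c
    done
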
	I1222 00:37:12.199822 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:12.199833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:12.263467 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:12.255080   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.255523   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257075   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257671   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.259356   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:37:12.255080   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.255523   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257075   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257671   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.259356   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
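The describe-nodes failure above is a direct consequence of the empty probes: kubectl dials https://localhost:8441, localhost resolves to the IPv6 loopback [::1] first, and with no kube-apiserver container running nothing is bound to port 8441, hence connection refused. Two quick checks that confirm this from the node (a sketch):

    curl -sk https://localhost:8441/healthz        # refused while the apiserver is down
    sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'
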
	I1222 00:37:12.263477 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:12.263488 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:12.334334 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:12.334358 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:12.374086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:12.374104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:12.447671 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:12.447690 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:14.962865 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:14.972963 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:14.973024 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:14.998158 1921402 cri.go:96] found id: ""
	I1222 00:37:14.998172 1921402 logs.go:282] 0 containers: []
	W1222 00:37:14.998179 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:14.998185 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:14.998243 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:15.029851 1921402 cri.go:96] found id: ""
	I1222 00:37:15.029867 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.029874 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:15.029882 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:15.029948 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:15.057647 1921402 cri.go:96] found id: ""
	I1222 00:37:15.057662 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.057668 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:15.057674 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:15.057742 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:15.084111 1921402 cri.go:96] found id: ""
	I1222 00:37:15.084126 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.084134 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:15.084140 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:15.084205 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:15.114963 1921402 cri.go:96] found id: ""
	I1222 00:37:15.114978 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.114984 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:15.114990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:15.115050 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:15.140108 1921402 cri.go:96] found id: ""
	I1222 00:37:15.140122 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.140129 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:15.140134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:15.140192 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:15.169273 1921402 cri.go:96] found id: ""
	I1222 00:37:15.169287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.169294 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:15.169302 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:15.169312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:15.235722 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:15.235741 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:15.250646 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:15.250662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:15.315252 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:15.306828   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.307685   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309165   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309676   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.311367   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:37:15.306828   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.307685   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309165   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309676   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.311367   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:37:15.315262 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:15.315272 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:15.390764 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:15.390784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:17.922554 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:17.932679 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:17.932739 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:17.957865 1921402 cri.go:96] found id: ""
	I1222 00:37:17.957879 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.957886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:17.957891 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:17.957952 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:17.987323 1921402 cri.go:96] found id: ""
	I1222 00:37:17.987337 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.987344 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:17.987349 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:17.987410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:18.017446 1921402 cri.go:96] found id: ""
	I1222 00:37:18.017461 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.017469 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:18.017474 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:18.017538 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:18.043105 1921402 cri.go:96] found id: ""
	I1222 00:37:18.043119 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.043126 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:18.043131 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:18.043188 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:18.069214 1921402 cri.go:96] found id: ""
	I1222 00:37:18.069229 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.069236 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:18.069241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:18.069304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:18.094585 1921402 cri.go:96] found id: ""
	I1222 00:37:18.094598 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.094605 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:18.094610 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:18.094670 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:18.124271 1921402 cri.go:96] found id: ""
	I1222 00:37:18.124292 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.124300 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:18.124307 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:18.124318 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:18.191628 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:18.191645 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:18.206373 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:18.206389 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:18.271908 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:18.263844   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.264495   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.265977   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.266531   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.267971   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:37:18.263844   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.264495   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.265977   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.266531   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.267971   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:37:18.271918 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:18.271931 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:18.347020 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:18.347041 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:20.886577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:20.896630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:20.896687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:20.924710 1921402 cri.go:96] found id: ""
	I1222 00:37:20.924724 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.924731 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:20.924737 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:20.924797 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:20.949929 1921402 cri.go:96] found id: ""
	I1222 00:37:20.949944 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.949951 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:20.949956 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:20.950015 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:20.975241 1921402 cri.go:96] found id: ""
	I1222 00:37:20.975254 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.975262 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:20.975273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:20.975332 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:21.007134 1921402 cri.go:96] found id: ""
	I1222 00:37:21.007149 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.007157 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:21.007162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:21.007226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:21.036823 1921402 cri.go:96] found id: ""
	I1222 00:37:21.036837 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.036845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:21.036850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:21.036922 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:21.066186 1921402 cri.go:96] found id: ""
	I1222 00:37:21.066201 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.066208 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:21.066214 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:21.066273 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:21.092594 1921402 cri.go:96] found id: ""
	I1222 00:37:21.092609 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.092616 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:21.092624 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:21.092658 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:21.155743 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:21.147229   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.148232   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.149047   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150532   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150962   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:37:21.147229   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.148232   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.149047   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150532   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150962   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:37:21.155753 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:21.155763 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:21.230114 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:21.230133 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:21.258046 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:21.258062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:21.330592 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:21.330619 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:23.847823 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:23.857930 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:23.857993 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:23.883272 1921402 cri.go:96] found id: ""
	I1222 00:37:23.883286 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.883294 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:23.883300 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:23.883355 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:23.907502 1921402 cri.go:96] found id: ""
	I1222 00:37:23.907515 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.907523 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:23.907528 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:23.907588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:23.932684 1921402 cri.go:96] found id: ""
	I1222 00:37:23.932698 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.932705 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:23.932711 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:23.932769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:23.957636 1921402 cri.go:96] found id: ""
	I1222 00:37:23.957650 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.957657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:23.957662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:23.957719 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:23.982476 1921402 cri.go:96] found id: ""
	I1222 00:37:23.982490 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.982497 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:23.982502 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:23.982560 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:24.014243 1921402 cri.go:96] found id: ""
	I1222 00:37:24.014258 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.014265 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:24.014278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:24.014341 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:24.041726 1921402 cri.go:96] found id: ""
	I1222 00:37:24.041740 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.041748 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:24.041756 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:24.041766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:24.110293 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:24.110312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:24.126712 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:24.126736 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:24.190552 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:37:24.190564 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:24.190575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:24.262140 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:24.262165 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:26.797351 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:26.808692 1921402 kubeadm.go:602] duration metric: took 4m2.99406476s to restartPrimaryControlPlane
	W1222 00:37:26.808754 1921402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:37:26.808829 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
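At this point restartPrimaryControlPlane has polled for just over four minutes (4m2.99s) without ever seeing an apiserver, so minikube gives up on an in-place restart and falls back to wiping and re-initializing the cluster. The reset step it runs is the standard kubeadm teardown and can be invoked standalone with the same flags (extracted verbatim from the line above):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
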
	I1222 00:37:27.218865 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:37:27.231651 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:37:27.239390 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:37:27.239447 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:37:27.246990 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:37:27.247000 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:37:27.247052 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:37:27.254602 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:37:27.254657 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:37:27.261917 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:37:27.269756 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:37:27.269819 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:37:27.277175 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.285102 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:37:27.285164 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.292582 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:37:27.300147 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:37:27.300201 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
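The four grep/rm pairs above implement a keep-or-regenerate rule: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441; otherwise it is removed so the upcoming kubeadm init writes a fresh one. Here every grep exits with status 2 (the files are gone after the reset), so each rm is a no-op. The pattern for a single file (a sketch):

    f=/etc/kubernetes/admin.conf
    sudo grep -q 'https://control-plane.minikube.internal:8441' "$f" || sudo rm -f "$f"
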
	I1222 00:37:27.307155 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:37:27.346116 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:37:27.346165 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:37:27.414020 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:37:27.414089 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:37:27.414123 1921402 kubeadm.go:319] OS: Linux
	I1222 00:37:27.414168 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:37:27.414215 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:37:27.414261 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:37:27.414307 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:37:27.414354 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:37:27.414404 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:37:27.414448 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:37:27.414494 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:37:27.414538 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:37:27.481436 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:37:27.481554 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:37:27.481674 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
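The preflight hint above is actionable: the image pull can be done ahead of time against the same kubeadm config, which takes image-download latency out of the init path (a sketch using the config path from this run):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
        kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
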
	I1222 00:37:27.489751 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:37:27.493685 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:37:27.493793 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:37:27.493881 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:37:27.493967 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:37:27.494032 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:37:27.494108 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:37:27.494164 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:37:27.494230 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:37:27.494294 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:37:27.494372 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:37:27.494448 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:37:27.494487 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:37:27.494545 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:37:27.957235 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:37:28.242322 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:37:28.521612 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:37:28.976211 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:37:29.260315 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:37:29.260957 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:37:29.263905 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:37:29.266956 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:37:29.267051 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:37:29.267139 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:37:29.268296 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:37:29.283008 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:37:29.283332 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:37:29.290939 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:37:29.291397 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:37:29.291445 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:37:29.423121 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:37:29.423418 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:41:29.424266 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00111221s
	I1222 00:41:29.424291 1921402 kubeadm.go:319] 
	I1222 00:41:29.424347 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:41:29.424379 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:41:29.424483 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:41:29.424487 1921402 kubeadm.go:319] 
	I1222 00:41:29.424591 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:41:29.424622 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:41:29.424652 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:41:29.424655 1921402 kubeadm.go:319] 
	I1222 00:41:29.429048 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:41:29.429508 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:41:29.429617 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:41:29.429851 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:41:29.429855 1921402 kubeadm.go:319] 
	I1222 00:41:29.429925 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:41:29.430059 1921402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00111221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
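The wait-control-plane failure above reduces to one probe: kubeadm polls the kubelet's local healthz endpoint for up to 4m0s and never gets an answer, meaning the kubelet itself never came up healthy (the apiserver is downstream of that). The SystemVerification warnings are advisory; in particular the missing "configs" kernel module only means the checker could not read the kernel config, which Ubuntu AWS kernels ship at /boot/config-$(uname -r) instead. To localize the failure, run the exact probe and the two commands kubeadm itself suggests (on the node):

    curl -sSL http://127.0.0.1:10248/healthz    # the probe kubeadm ran
    systemctl status kubelet                    # is the unit active at all?
    journalctl -xeu kubelet | tail -n 50        # the kubelet's own reason for dying
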
	
	I1222 00:41:29.430143 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:41:29.837125 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:41:29.849742 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:41:29.849801 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:41:29.857510 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:41:29.857520 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:41:29.857569 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:41:29.865043 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:41:29.865100 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:41:29.872214 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:41:29.879983 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:41:29.880037 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:41:29.887891 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.895656 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:41:29.895711 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.902944 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:41:29.910473 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:41:29.910531 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:41:29.917929 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:41:29.958087 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:41:29.958157 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:41:30.050415 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:41:30.050500 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:41:30.050542 1921402 kubeadm.go:319] OS: Linux
	I1222 00:41:30.050625 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:41:30.050679 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:41:30.050743 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:41:30.050790 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:41:30.050861 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:41:30.050918 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:41:30.050963 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:41:30.051010 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:41:30.051055 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:41:30.118501 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:41:30.118617 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:41:30.118712 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:41:30.125936 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:41:30.127797 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:41:30.127893 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:41:30.129701 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:41:30.129794 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:41:30.129856 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:41:30.129967 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:41:30.130025 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:41:30.130096 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:41:30.130183 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:41:30.130272 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:41:30.130344 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:41:30.130391 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:41:30.130455 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:41:30.344018 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:41:30.899259 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:41:31.157696 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:41:31.268170 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:41:31.664677 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:41:31.665563 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:41:31.668250 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:41:31.669734 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:41:31.669834 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:41:31.670177 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:41:31.671652 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:41:31.687500 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:41:31.687601 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:41:31.694901 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:41:31.695472 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:41:31.695535 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:41:31.830593 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:41:31.830722 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:45:31.830877 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001320631s
	I1222 00:45:31.831182 1921402 kubeadm.go:319] 
	I1222 00:45:31.831295 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:45:31.831352 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:45:31.831682 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:45:31.831690 1921402 kubeadm.go:319] 
	I1222 00:45:31.831881 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:45:31.831936 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:45:31.831990 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:45:31.831994 1921402 kubeadm.go:319] 
	I1222 00:45:31.836486 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:45:31.837376 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:45:31.837502 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:45:31.837782 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:45:31.837816 1921402 kubeadm.go:319] 
	I1222 00:45:31.837891 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:45:31.837949 1921402 kubeadm.go:403] duration metric: took 12m8.05957574s to StartCluster
	I1222 00:45:31.837990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:45:31.838056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:45:31.866824 1921402 cri.go:96] found id: ""
	I1222 00:45:31.866837 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.866844 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:45:31.866850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:45:31.866906 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:45:31.892772 1921402 cri.go:96] found id: ""
	I1222 00:45:31.892797 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.892805 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:45:31.892812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:45:31.892881 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:45:31.919961 1921402 cri.go:96] found id: ""
	I1222 00:45:31.919975 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.919982 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:45:31.919987 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:45:31.920043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:45:31.945683 1921402 cri.go:96] found id: ""
	I1222 00:45:31.945697 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.945704 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:45:31.945710 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:45:31.945770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:45:31.971052 1921402 cri.go:96] found id: ""
	I1222 00:45:31.971066 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.971074 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:45:31.971080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:45:31.971139 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:45:31.997143 1921402 cri.go:96] found id: ""
	I1222 00:45:31.997158 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.997165 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:45:31.997170 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:45:31.997226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:45:32.024698 1921402 cri.go:96] found id: ""
	I1222 00:45:32.024713 1921402 logs.go:282] 0 containers: []
	W1222 00:45:32.024721 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:45:32.024729 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:45:32.024739 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:45:32.092110 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:45:32.092130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:45:32.109516 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:45:32.109532 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:45:32.182932 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:45:32.182942 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:45:32.182952 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:45:32.255366 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:45:32.255386 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:45:32.288418 1921402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:45:32.288449 1921402 out.go:285] * 
	W1222 00:45:32.288511 1921402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.288527 1921402 out.go:285] * 
	W1222 00:45:32.290657 1921402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:45:32.295098 1921402 out.go:203] 
	W1222 00:45:32.297043 1921402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001320631s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:45:32.297089 1921402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:45:32.297111 1921402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:45:32.299905 1921402 out.go:203] 
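	# Triage sketch for the failure above: the kubelet refuses to start on a cgroup v1
	# host, so kubeadm's wait-control-plane phase times out with nothing listening on
	# 127.0.0.1:10248. Assumes shell access to the functional-837733 node (profile name
	# taken from this log); every command below is either referenced in the output
	# itself or a standard coreutils probe.
	stat -fc %T /sys/fs/cgroup/      # cgroup2fs = v2; tmpfs = v1, which this v1.35.0-rc.1 kubelet rejects unless FailCgroupV1 is false (see the SystemVerification warning above)
	systemctl status kubelet         # suggested in the kubeadm error text above
	journalctl -xeu kubelet | tail -n 50
	# On the host (not the node), the retry minikube itself suggests:
	minikube start -p functional-837733 --extra-config=kubelet.cgroup-driver=systemd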
	
	
	==> CRI-O <==
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964031496Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964252675Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964296825Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964375806Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-837733 found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005620606Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005780347Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005818057Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-837733 found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040267534Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040399911Z" level=info msg="Image localhost/kicbase/echo-server:functional-837733 not found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040434683Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-837733 found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852687205Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.85283559Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852880003Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852954134Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-837733 found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886292818Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886432169Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886470281Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-837733 found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.911030791Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=369fa22a-71b8-4625-900e-18b79a9137e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962795293Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962862203Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=9c28ba1f-7abf-4263-860e-8a0ef6162d0e name=/runtime.v1.ImageService/ListImages
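	# Note: the entries above show CRI-O walking its short-name lookup order for
	# kicbase/echo-server:functional-837733 (unqualified-search registries, then
	# docker.io/, then localhost/) and coming up empty on all three. A quick
	# confirmation on the node, a sketch assuming crictl as already used earlier
	# in this log:
	sudo crictl images | grep echo-server             # the functional-837733 tag should be absent
	cat /etc/containers/registries.conf.d/crio.conf   # the search list CRI-O consults, per the path logged above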
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:47:49.150104   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:49.150696   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:49.152109   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:49.152636   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:47:49.154213   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
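	# Note: every kubectl call here dies with connection refused on localhost:8441,
	# consistent with the empty container table above: no kube-apiserver was ever
	# created, so nothing holds the apiserver port. A direct check on the node,
	# assuming standard iproute2 and curl are present:
	sudo ss -tlnp | grep 8441                # should print nothing with the control plane down
	curl -k https://localhost:8441/healthz   # "ok" on a healthy control plane; connection refused here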
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:47:49 up 1 day,  7:30,  0 user,  load average: 0.16, 0.30, 0.48
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:47:46 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:47 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 22 00:47:47 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:47 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:47 functional-837733 kubelet[23363]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:47 functional-837733 kubelet[23363]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:47 functional-837733 kubelet[23363]: E1222 00:47:47.142446   23363 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:47 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:47 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:47 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 22 00:47:47 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:47 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:47 functional-837733 kubelet[23369]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:47 functional-837733 kubelet[23369]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:47 functional-837733 kubelet[23369]: E1222 00:47:47.903457   23369 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:47 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:47 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:47:48 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 22 00:47:48 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:48 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:47:48 functional-837733 kubelet[23390]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:48 functional-837733 kubelet[23390]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:47:48 functional-837733 kubelet[23390]: E1222 00:47:48.650680   23390 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:47:48 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:47:48 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
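	# Note: this section closes the loop: systemd has restarted kubelet 500+ times and
	# every attempt fails config validation with "kubelet is configured to not run on a
	# host using cgroup v1". The SystemVerification warning earlier names FailCgroupV1
	# as the KubeletConfiguration option gating this. To inspect the generated config,
	# a sketch assuming the path kubeadm reported in its [kubelet-start] phase:
	sudo grep -iE 'cgroupDriver|failCgroupV1' /var/lib/kubelet/config.yaml
	sudo systemctl stop kubelet              # halt the rapid restart loop while investigating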
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (365.137551ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:45:58.093474 1881219 retry.go:84] will retry after 3.5s: Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:46:11.617241 1881219 retry.go:84] will retry after 3.8s: Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:46:25.417061 1881219 retry.go:84] will retry after 4.2s: Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1222 00:46:25.759825 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:46:39.639251 1881219 retry.go:84] will retry after 12.9s: Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 41.5s)
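The retry.go entries follow a growing-backoff pattern against the tunnel endpoint. Below is a minimal sketch of that kind of loop with a plain net/http probe; the URL, intervals, and growth factor are illustrative assumptions, not minikube's actual retry helper.

// retry_probe.go: a sketch of the backoff loop suggested by the retry.go
// lines, probing a service URL until it answers or the deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	backoff := 4 * time.Second            // illustrative initial interval
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://10.109.136.175")
		if err == nil {
			resp.Body.Close()
			fmt.Println("service answered:", resp.Status)
			return
		}
		// Matches the shape of the log: report the temporary error, then
		// sleep an increasing interval before the next attempt.
		fmt.Printf("will retry after %v: Temporary Error: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up: service never answered")
}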
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:47:02.568266 1881219 retry.go:84] will retry after 8.1s: Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 1m4.5s)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:47:20.677983 1881219 retry.go:84] will retry after 16.7s: Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 1m22.6s)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1222 00:47:43.835798 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
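The cert_rotation errors come from trying to load a client certificate whose file is no longer on disk. The standard-library call below reproduces the same underlying "no such file or directory" failure; the profile path is a hypothetical stand-in for the one in the log.

// certload.go: reproduces the underlying error in the cert_rotation lines:
// loading a client cert whose file has been deleted. Paths are illustrative.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	profile := "/home/jenkins/.minikube/profiles/functional-885873" // hypothetical path
	_, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		// Prints: open .../client.crt: no such file or directory
		fmt.Println("Loading client cert failed:", err)
	}
}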
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1222 00:49:28.819210 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(previous warning repeated 20 more times while polling)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (301.530246ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
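Every probe above failed at the TCP layer, so the apiserver on 192.168.49.2:8441 was never reachable and the label selector itself was never evaluated. A minimal sketch for reproducing the same poll by hand (assumes kubectl on the host and a kubeconfig context named functional-837733, which minikube creates for this profile):

	# Probe the endpoint the helper polls; a refusal here mirrors the warnings above
	curl -k --max-time 5 https://192.168.49.2:8441/healthz
	# Re-run the pod list with the identical label selector
	kubectl --context functional-837733 -n kube-system get pods \
	  -l integration-test=storage-provisioner --request-timeout=5s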
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
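The inspect output confirms the node container itself is running and that 8441/tcp (the apiserver port for this profile) is published on 127.0.0.1:38474, so the failure is inside the guest rather than in Docker networking. A sketch using standard docker CLI Go templates to pull just those two facts out of the JSON above:

	# Container state as reported by Docker
	docker inspect -f '{{.State.Status}}' functional-837733
	# Host port bound to the apiserver port 8441/tcp (38474 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-837733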
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (321.802646ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
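The two probes disagree for a reason visible in the logs below: the host (the node container) is Running while the apiserver inside it is Stopped. Both use minikube's --format Go templates, so the same comparison can be made in a single call; a sketch against this profile:

	# Host vs. apiserver state in one status query
	out/minikube-linux-arm64 status -p functional-837733 --format '{{.Host}}/{{.APIServer}}'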
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-837733 ssh findmnt -T /mount-9p | grep 9p                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh            │ functional-837733 ssh findmnt -T /mount-9p | grep 9p                                                                                        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh            │ functional-837733 ssh -- ls -la /mount-9p                                                                                                   │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh            │ functional-837733 ssh sudo umount -f /mount-9p                                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ mount          │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount1 --alsologtostderr -v=1        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh            │ functional-837733 ssh findmnt -T /mount1                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ mount          │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount2 --alsologtostderr -v=1        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ mount          │ -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount3 --alsologtostderr -v=1        │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ ssh            │ functional-837733 ssh findmnt -T /mount2                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ ssh            │ functional-837733 ssh findmnt -T /mount3                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │ 22 Dec 25 00:47 UTC │
	│ mount          │ -p functional-837733 --kill=true                                                                                                            │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ start          │ -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ start          │ -p functional-837733 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:47 UTC │                     │
	│ start          │ -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-837733 --alsologtostderr -v=1                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │                     │
	│ update-context │ functional-837733 update-context --alsologtostderr -v=2                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ update-context │ functional-837733 update-context --alsologtostderr -v=2                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ update-context │ functional-837733 update-context --alsologtostderr -v=2                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ image          │ functional-837733 image ls --format short --alsologtostderr                                                                                 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ image          │ functional-837733 image ls --format yaml --alsologtostderr                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ ssh            │ functional-837733 ssh pgrep buildkitd                                                                                                       │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │                     │
	│ image          │ functional-837733 image build -t localhost/my-image:functional-837733 testdata/build --alsologtostderr                                      │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ image          │ functional-837733 image ls                                                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ image          │ functional-837733 image ls --format json --alsologtostderr                                                                                  │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	│ image          │ functional-837733 image ls --format table --alsologtostderr                                                                                 │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:48 UTC │ 22 Dec 25 00:48 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:48:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:48:00.432356 1940285 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:48:00.432506 1940285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:00.432512 1940285 out.go:374] Setting ErrFile to fd 2...
	I1222 00:48:00.432518 1940285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:00.433011 1940285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:48:00.433541 1940285 out.go:368] Setting JSON to false
	I1222 00:48:00.434510 1940285 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":113430,"bootTime":1766251051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:48:00.434603 1940285 start.go:143] virtualization:  
	I1222 00:48:00.438306 1940285 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:48:00.441464 1940285 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:48:00.441573 1940285 notify.go:221] Checking for updates...
	I1222 00:48:00.447466 1940285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:48:00.450435 1940285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:48:00.453538 1940285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:48:00.456664 1940285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:48:00.460323 1940285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:48:00.463901 1940285 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:48:00.464651 1940285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:48:00.501637 1940285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:48:00.501792 1940285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:48:00.569175 1940285 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:48:00.558846475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:48:00.569280 1940285 docker.go:319] overlay module found
	I1222 00:48:00.572557 1940285 out.go:179] * Using the docker driver based on the existing profile
	I1222 00:48:00.575496 1940285 start.go:309] selected driver: docker
	I1222 00:48:00.575517 1940285 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:48:00.575624 1940285 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:48:00.579072 1940285 out.go:203] 
	W1222 00:48:00.581978 1940285 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 00:48:00.584851 1940285 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964031496Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964252675Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964296825Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964375806Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-837733 found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005620606Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005780347Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005818057Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-837733 found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040267534Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040399911Z" level=info msg="Image localhost/kicbase/echo-server:functional-837733 not found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040434683Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-837733 found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852687205Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.85283559Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852880003Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.852954134Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-837733 found" id=190f3be0-8a95-4387-857e-55c8ccef5304 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886292818Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886432169Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.886470281Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-837733 found" id=e72f8855-34fd-4154-82a7-06ae569a15b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.911030791Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=369fa22a-71b8-4625-900e-18b79a9137e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962795293Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:44 functional-837733 crio[9971]: time="2025-12-22T00:45:44.962862203Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=9c28ba1f-7abf-4263-860e-8a0ef6162d0e name=/runtime.v1.ImageService/ListImages
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:49:51.537770   25469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:49:51.538343   25469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:49:51.539786   25469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:49:51.540172   25469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:49:51.541662   25469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:49:51 up 1 day,  7:32,  0 user,  load average: 0.31, 0.29, 0.46
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:49:49 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:49:49 functional-837733 kubelet[25341]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:49:49 functional-837733 kubelet[25341]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:49:49 functional-837733 kubelet[25341]: E1222 00:49:49.390822   25341 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:49:49 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:49:49 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:49:50 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 664.
	Dec 22 00:49:50 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:49:50 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:49:50 functional-837733 kubelet[25346]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:49:50 functional-837733 kubelet[25346]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:49:50 functional-837733 kubelet[25346]: E1222 00:49:50.137157   25346 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:49:50 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:49:50 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:49:50 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 665.
	Dec 22 00:49:50 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:49:50 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:49:50 functional-837733 kubelet[25377]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:49:50 functional-837733 kubelet[25377]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:49:50 functional-837733 kubelet[25377]: E1222 00:49:50.904292   25377 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:49:50 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:49:50 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:49:51 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 666.
	Dec 22 00:49:51 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:49:51 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (319.076992ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.63s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (3.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-837733 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-837733 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (94.489997ms)

** stderr ** 
	E1222 00:45:40.672230 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.674916 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.676313 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.677704 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.679092 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-837733 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1222 00:45:40.672230 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.674916 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.676313 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.677704 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.679092 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1222 00:45:40.672230 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.674916 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.676313 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.677704 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.679092 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1222 00:45:40.672230 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.674916 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.676313 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.677704 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.679092 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1222 00:45:40.672230 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.674916 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.676313 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.677704 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.679092 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1222 00:45:40.672230 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.674916 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.676313 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.677704 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:45:40.679092 1934619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-837733
helpers_test.go:244: (dbg) docker inspect functional-837733:

-- stdout --
	[
	    {
	        "Id": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	        "Created": "2025-12-22T00:18:39.648880705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1910025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:18:39.710239294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/hosts",
	        "LogPath": "/var/lib/docker/containers/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371/0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371-json.log",
	        "Name": "/functional-837733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-837733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-837733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f3f5ed9153825565e6ae65b9ae094e398b205148f05dd9fbbe7bdee6f4b2371",
	                "LowerDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe03de76aaeec98a89a9f455cccac92ef05140325ae42ea6aa2d6dd2707162d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-837733",
	                "Source": "/var/lib/docker/volumes/functional-837733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-837733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-837733",
	                "name.minikube.sigs.k8s.io": "functional-837733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a77406ee58f25229e293566e783bf43170d9fa44b6689b49388c639e85fc50f",
	            "SandboxKey": "/var/run/docker/netns/2a77406ee58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38473"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-837733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ab:24:dc:0a:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f89aaec1de6551699109f76befe1f814060391f76cfbf4e2495e4998bf792bc0",
	                    "EndpointID": "1003f55ccf434f50f9eb92d19afe4a0c2a92af38625f8b12bdfbc8af058ee656",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-837733",
	                        "0f3f5ed91538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-837733 -n functional-837733: exit status 2 (420.92069ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 logs -n 25: (1.462881412s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-837733 ssh sudo crictl images                                                                                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh     │ functional-837733 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                          │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh     │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ cache   │ functional-837733 cache reload                                                                                                                              │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ ssh     │ functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │ 22 Dec 25 00:33 UTC │
	│ kubectl │ functional-837733 kubectl -- --context functional-837733 get pods                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ start   │ -p functional-837733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                    │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:33 UTC │                     │
	│ cp      │ functional-837733 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ config  │ functional-837733 config unset cpus                                                                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ config  │ functional-837733 config get cpus                                                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ config  │ functional-837733 config set cpus 2                                                                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ config  │ functional-837733 config get cpus                                                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ config  │ functional-837733 config unset cpus                                                                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh -n functional-837733 sudo cat /home/docker/cp-test.txt                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ config  │ functional-837733 config get cpus                                                                                                                           │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ license │                                                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ cp      │ functional-837733 cp functional-837733:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm866299650/001/cp-test.txt │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo systemctl is-active docker                                                                                                       │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ ssh     │ functional-837733 ssh -n functional-837733 sudo cat /home/docker/cp-test.txt                                                                                │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh sudo systemctl is-active containerd                                                                                                   │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	│ cp      │ functional-837733 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ ssh     │ functional-837733 ssh -n functional-837733 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │ 22 Dec 25 00:45 UTC │
	│ image   │ functional-837733 image load --daemon kicbase/echo-server:functional-837733 --alsologtostderr                                                               │ functional-837733 │ jenkins │ v1.37.0 │ 22 Dec 25 00:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:33:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:33:19.123176 1921402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:33:19.123288 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123297 1921402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:33:19.123301 1921402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:33:19.123533 1921402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:33:19.123864 1921402 out.go:368] Setting JSON to false
	I1222 00:33:19.124702 1921402 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":112549,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:33:19.124753 1921402 start.go:143] virtualization:  
	I1222 00:33:19.129939 1921402 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:33:19.132820 1921402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:33:19.132934 1921402 notify.go:221] Checking for updates...
	I1222 00:33:19.138397 1921402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:33:19.141280 1921402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:33:19.144132 1921402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:33:19.147015 1921402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:33:19.149703 1921402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:33:19.152994 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:19.153132 1921402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:33:19.180144 1921402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:33:19.180247 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.239273 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.229718138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.239382 1921402 docker.go:319] overlay module found
	I1222 00:33:19.242404 1921402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:33:19.245466 1921402 start.go:309] selected driver: docker
	I1222 00:33:19.245476 1921402 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.245601 1921402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:33:19.245707 1921402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:33:19.303768 1921402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:33:19.294614837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:33:19.304252 1921402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:33:19.304275 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:19.304327 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:19.304366 1921402 start.go:353] cluster config:
	{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:19.307509 1921402 out.go:179] * Starting "functional-837733" primary control-plane node in "functional-837733" cluster
	I1222 00:33:19.310426 1921402 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:33:19.313438 1921402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:33:19.316296 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:19.316334 1921402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:33:19.316337 1921402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:33:19.316345 1921402 cache.go:65] Caching tarball of preloaded images
	I1222 00:33:19.316452 1921402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 00:33:19.316461 1921402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:33:19.316560 1921402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/config.json ...
	I1222 00:33:19.335533 1921402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:33:19.335544 1921402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:33:19.335564 1921402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:33:19.335600 1921402 start.go:360] acquireMachinesLock for functional-837733: {Name:mkc2d2904865449602a16223c8b82cc2817ccda0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:33:19.335668 1921402 start.go:364] duration metric: took 47.728µs to acquireMachinesLock for "functional-837733"
	I1222 00:33:19.335688 1921402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:33:19.335699 1921402 fix.go:54] fixHost starting: 
	I1222 00:33:19.335947 1921402 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
	I1222 00:33:19.365010 1921402 fix.go:112] recreateIfNeeded on functional-837733: state=Running err=<nil>
	W1222 00:33:19.365031 1921402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:33:19.368348 1921402 out.go:252] * Updating the running docker "functional-837733" container ...
	I1222 00:33:19.368376 1921402 machine.go:94] provisionDockerMachine start ...
	I1222 00:33:19.368449 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.391712 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.392069 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.392078 1921402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:33:19.528918 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.528932 1921402 ubuntu.go:182] provisioning hostname "functional-837733"
	I1222 00:33:19.529008 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.546388 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.546693 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.546705 1921402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-837733 && echo "functional-837733" | sudo tee /etc/hostname
	I1222 00:33:19.690177 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-837733
	
	I1222 00:33:19.690264 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.708307 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:19.708609 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:19.708623 1921402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-837733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-837733/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-837733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:33:19.841808 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:33:19.841826 1921402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 00:33:19.841843 1921402 ubuntu.go:190] setting up certificates
	I1222 00:33:19.841860 1921402 provision.go:84] configureAuth start
	I1222 00:33:19.841918 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:19.861819 1921402 provision.go:143] copyHostCerts
	I1222 00:33:19.861874 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 00:33:19.861889 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 00:33:19.861964 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 00:33:19.862062 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 00:33:19.862066 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 00:33:19.862089 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 00:33:19.862134 1921402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 00:33:19.862137 1921402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 00:33:19.862158 1921402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 00:33:19.862198 1921402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.functional-837733 san=[127.0.0.1 192.168.49.2 functional-837733 localhost minikube]
	I1222 00:33:19.952084 1921402 provision.go:177] copyRemoteCerts
	I1222 00:33:19.952138 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:33:19.952182 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:19.970262 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.085780 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:33:20.107727 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:33:20.128189 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:33:20.148142 1921402 provision.go:87] duration metric: took 306.259546ms to configureAuth
	I1222 00:33:20.148170 1921402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:33:20.148393 1921402 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:33:20.148508 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.170002 1921402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:33:20.170298 1921402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38471 <nil> <nil>}
	I1222 00:33:20.170316 1921402 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 00:33:20.512165 1921402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 00:33:20.512178 1921402 machine.go:97] duration metric: took 1.143796307s to provisionDockerMachine
	I1222 00:33:20.512211 1921402 start.go:293] postStartSetup for "functional-837733" (driver="docker")
	I1222 00:33:20.512232 1921402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:33:20.512310 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:33:20.512363 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.530913 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.629793 1921402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:33:20.633223 1921402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:33:20.633240 1921402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:33:20.633250 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 00:33:20.633308 1921402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 00:33:20.633414 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 00:33:20.633491 1921402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts -> hosts in /etc/test/nested/copy/1881219
	I1222 00:33:20.633538 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1881219
	I1222 00:33:20.641126 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:20.658985 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts --> /etc/test/nested/copy/1881219/hosts (40 bytes)
	I1222 00:33:20.677182 1921402 start.go:296] duration metric: took 164.956569ms for postStartSetup
	I1222 00:33:20.677276 1921402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:33:20.677314 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.694936 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.786654 1921402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:33:20.791424 1921402 fix.go:56] duration metric: took 1.455724646s for fixHost
	I1222 00:33:20.791440 1921402 start.go:83] releasing machines lock for "functional-837733", held for 1.455763874s
	I1222 00:33:20.791545 1921402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-837733
	I1222 00:33:20.808426 1921402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:33:20.808469 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.808475 1921402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:33:20.808537 1921402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
	I1222 00:33:20.830636 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.834199 1921402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
	I1222 00:33:20.933012 1921402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:33:21.025436 1921402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 00:33:21.067278 1921402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:33:21.071776 1921402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:33:21.071868 1921402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:33:21.079967 1921402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:33:21.079982 1921402 start.go:496] detecting cgroup driver to use...
	I1222 00:33:21.080013 1921402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:33:21.080062 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 00:33:21.095680 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 00:33:21.108798 1921402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:33:21.108852 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:33:21.124535 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:33:21.138031 1921402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:33:21.254740 1921402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:33:21.374618 1921402 docker.go:234] disabling docker service ...
	I1222 00:33:21.374676 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:33:21.389413 1921402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:33:21.402447 1921402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:33:21.520019 1921402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:33:21.643581 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:33:21.657853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:33:21.671138 1921402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 00:33:21.671202 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.680093 1921402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 00:33:21.680158 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.688698 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.697375 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.705901 1921402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:33:21.713923 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.722872 1921402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.731197 1921402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 00:33:21.739610 1921402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:33:21.747277 1921402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:33:21.754578 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:21.862305 1921402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 00:33:22.033055 1921402 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 00:33:22.033138 1921402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 00:33:22.037243 1921402 start.go:564] Will wait 60s for crictl version
	I1222 00:33:22.037298 1921402 ssh_runner.go:195] Run: which crictl
	I1222 00:33:22.040800 1921402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:33:22.068746 1921402 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 00:33:22.068856 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.098980 1921402 ssh_runner.go:195] Run: crio --version
	I1222 00:33:22.132885 1921402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 00:33:22.135905 1921402 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:33:22.151401 1921402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:33:22.158212 1921402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:33:22.160827 1921402 kubeadm.go:884] updating cluster {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:33:22.160965 1921402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:33:22.161027 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.202970 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.202982 1921402 crio.go:433] Images already preloaded, skipping extraction
	I1222 00:33:22.203036 1921402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:33:22.229110 1921402 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 00:33:22.229123 1921402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:33:22.229129 1921402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1222 00:33:22.229228 1921402 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-837733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:33:22.229311 1921402 ssh_runner.go:195] Run: crio config
	I1222 00:33:22.300174 1921402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:33:22.300213 1921402 cni.go:84] Creating CNI manager for ""
	I1222 00:33:22.300224 1921402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:33:22.300232 1921402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:33:22.300257 1921402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-837733 NodeName:functional-837733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:33:22.300399 1921402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-837733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:33:22.300509 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:33:22.308005 1921402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:33:22.308080 1921402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:33:22.315446 1921402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 00:33:22.327549 1921402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:33:22.340301 1921402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1222 00:33:22.356214 1921402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:33:22.360143 1921402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:33:22.489828 1921402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:33:23.096445 1921402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733 for IP: 192.168.49.2
	I1222 00:33:23.096455 1921402 certs.go:195] generating shared ca certs ...
	I1222 00:33:23.096469 1921402 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:33:23.096601 1921402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 00:33:23.096676 1921402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 00:33:23.096682 1921402 certs.go:257] generating profile certs ...
	I1222 00:33:23.096765 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.key
	I1222 00:33:23.096814 1921402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key.ceb2a50e
	I1222 00:33:23.096847 1921402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key
	I1222 00:33:23.097000 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 00:33:23.097032 1921402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 00:33:23.097039 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 00:33:23.097063 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:33:23.097085 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:33:23.097109 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 00:33:23.097150 1921402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 00:33:23.097934 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:33:23.122733 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:33:23.143796 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:33:23.165783 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:33:23.184349 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:33:23.201211 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:33:23.218537 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:33:23.235950 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 00:33:23.253260 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 00:33:23.270022 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 00:33:23.287385 1921402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:33:23.304167 1921402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:33:23.316581 1921402 ssh_runner.go:195] Run: openssl version
	I1222 00:33:23.322651 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.329731 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 00:33:23.336799 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340358 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.340410 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 00:33:23.381351 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:33:23.388858 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.396030 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:33:23.403636 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407354 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.407408 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:33:23.449165 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:33:23.456643 1921402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.463729 1921402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 00:33:23.471167 1921402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474893 1921402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.474950 1921402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 00:33:23.516093 1921402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:33:23.523732 1921402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:33:23.527686 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:33:23.568782 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:33:23.609619 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:33:23.650314 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:33:23.696111 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:33:23.737413 1921402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:33:23.778391 1921402 kubeadm.go:401] StartCluster: {Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:33:23.778492 1921402 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 00:33:23.778592 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.806800 1921402 cri.go:96] found id: ""
	I1222 00:33:23.806880 1921402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:33:23.814609 1921402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:33:23.814618 1921402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:33:23.814668 1921402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:33:23.821980 1921402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.822486 1921402 kubeconfig.go:125] found "functional-837733" server: "https://192.168.49.2:8441"
	I1222 00:33:23.823779 1921402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:33:23.831691 1921402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:18:48.647493207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:33:22.349448241 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1222 00:33:23.831708 1921402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:33:23.831719 1921402 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 00:33:23.831773 1921402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:33:23.866496 1921402 cri.go:96] found id: ""
	I1222 00:33:23.866558 1921402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:33:23.889134 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:33:23.897048 1921402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 22 00:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 00:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:22 /etc/kubernetes/scheduler.conf
	
	I1222 00:33:23.897106 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:33:23.904740 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:33:23.912683 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.912741 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:33:23.920107 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.927907 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.927961 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:33:23.935547 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:33:23.943201 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:33:23.943254 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:33:23.950967 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:33:23.959359 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:24.006017 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.352657 1921402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346615684s)
	I1222 00:33:25.352716 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.546706 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.608812 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:33:25.656813 1921402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:33:25.656915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.157087 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:33:26.657071 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (sudo pgrep -xnf kube-apiserver.*minikube. repeated at ~500ms intervals until 00:34:24) ...
	I1222 00:34:24.657732 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:25.157680 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:25.657000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:25.657101 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:25.682430 1921402 cri.go:96] found id: ""
	I1222 00:34:25.682444 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.682451 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:25.682457 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:25.682521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:25.708391 1921402 cri.go:96] found id: ""
	I1222 00:34:25.708406 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.708414 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:25.708420 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:25.708495 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:25.735246 1921402 cri.go:96] found id: ""
	I1222 00:34:25.735261 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.735268 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:25.735273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:25.735330 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:25.764584 1921402 cri.go:96] found id: ""
	I1222 00:34:25.764599 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.764606 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:25.764611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:25.764672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:25.790407 1921402 cri.go:96] found id: ""
	I1222 00:34:25.790421 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.790428 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:25.790434 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:25.790492 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:25.816256 1921402 cri.go:96] found id: ""
	I1222 00:34:25.816270 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.816278 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:25.816283 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:25.816345 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:25.842813 1921402 cri.go:96] found id: ""
	I1222 00:34:25.842828 1921402 logs.go:282] 0 containers: []
	W1222 00:34:25.842835 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:25.842843 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:25.842853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:25.914526 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:25.914546 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:25.948008 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:25.948024 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:26.016574 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:26.016596 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:26.032648 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:26.032666 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:26.102628 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:26.094028   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.094916   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.096594   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.097214   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:26.098390   11060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:34:28.604431 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:28.618535 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:28.618599 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:28.647904 1921402 cri.go:96] found id: ""
	I1222 00:34:28.647920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.647927 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:28.647933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:28.647987 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:28.674397 1921402 cri.go:96] found id: ""
	I1222 00:34:28.674411 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.674418 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:28.674423 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:28.674481 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:28.704093 1921402 cri.go:96] found id: ""
	I1222 00:34:28.704107 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.704114 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:28.704129 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:28.704187 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:28.729449 1921402 cri.go:96] found id: ""
	I1222 00:34:28.729463 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.729470 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:28.729475 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:28.729535 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:28.754277 1921402 cri.go:96] found id: ""
	I1222 00:34:28.754292 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.754300 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:28.754305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:28.754366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:28.779419 1921402 cri.go:96] found id: ""
	I1222 00:34:28.779445 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.779453 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:28.779459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:28.779526 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:28.804826 1921402 cri.go:96] found id: ""
	I1222 00:34:28.804841 1921402 logs.go:282] 0 containers: []
	W1222 00:34:28.804848 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:28.804855 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:28.804866 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:28.871702 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:28.871720 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:28.886327 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:28.886342 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:28.950915 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:28.942587   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.943003   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.944829   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.945306   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.946899   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:28.942587   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.943003   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.944829   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.945306   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:28.946899   11151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:28.950935 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:28.950945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:29.021787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:29.021807 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.549965 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:31.559848 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:31.559909 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:31.586473 1921402 cri.go:96] found id: ""
	I1222 00:34:31.586487 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.586494 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:31.586500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:31.586557 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:31.619635 1921402 cri.go:96] found id: ""
	I1222 00:34:31.619649 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.619657 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:31.619662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:31.619718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:31.649990 1921402 cri.go:96] found id: ""
	I1222 00:34:31.650004 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.650011 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:31.650017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:31.650079 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:31.676339 1921402 cri.go:96] found id: ""
	I1222 00:34:31.676354 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.676361 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:31.676366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:31.676424 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:31.704084 1921402 cri.go:96] found id: ""
	I1222 00:34:31.704098 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.704105 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:31.704110 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:31.704170 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:31.728706 1921402 cri.go:96] found id: ""
	I1222 00:34:31.728720 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.728727 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:31.728732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:31.728790 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:31.756276 1921402 cri.go:96] found id: ""
	I1222 00:34:31.756290 1921402 logs.go:282] 0 containers: []
	W1222 00:34:31.756297 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:31.756304 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:31.756314 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:31.825066 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:31.825085 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:31.853826 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:31.853841 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:31.920106 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:31.920125 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:31.934967 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:31.934983 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:31.996347 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:31.988158   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.988788   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990297   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.990881   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:31.992410   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
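
For readers triaging this failure: "connect: connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, which is a different condition from an apiserver that is serving but unhealthy. A hedged way to tell the two apart from inside the node, assuming curl is available and the cluster's default anonymous access to the standard health endpoints has not been disabled:

    # /readyz is a standard kube-apiserver health endpoint; on a refused
    # connection curl exits non-zero without ever reaching TLS or auth.
    curl -sk 'https://localhost:8441/readyz?verbose' \
      || echo "connect failed: nothing is listening on 8441"
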
	I1222 00:34:34.497426 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:34.507616 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:34.507676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:34.531742 1921402 cri.go:96] found id: ""
	I1222 00:34:34.531755 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.531763 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:34.531769 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:34.531843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:34.559670 1921402 cri.go:96] found id: ""
	I1222 00:34:34.559683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.559691 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:34.559695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:34.559762 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:34.587516 1921402 cri.go:96] found id: ""
	I1222 00:34:34.587531 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.587538 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:34.587543 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:34.587621 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:34.618669 1921402 cri.go:96] found id: ""
	I1222 00:34:34.618683 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.618690 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:34.618695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:34.618754 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:34.655347 1921402 cri.go:96] found id: ""
	I1222 00:34:34.655361 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.655368 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:34.655373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:34.655429 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:34.680612 1921402 cri.go:96] found id: ""
	I1222 00:34:34.680626 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.680643 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:34.680648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:34.680718 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:34.706049 1921402 cri.go:96] found id: ""
	I1222 00:34:34.706070 1921402 logs.go:282] 0 containers: []
	W1222 00:34:34.706078 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:34.706085 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:34.706095 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:34.774755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:34.774774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:34.789897 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:34.789917 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:34.854675 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:34.846401   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.846935   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848378   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.848883   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:34.850425   11365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:34.854686 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:34.854696 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:34.923449 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:34.923469 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.451135 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:37.461580 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:37.461643 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:37.488448 1921402 cri.go:96] found id: ""
	I1222 00:34:37.488474 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.488482 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:37.488488 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:37.488544 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:37.513499 1921402 cri.go:96] found id: ""
	I1222 00:34:37.513514 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.513526 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:37.513531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:37.513595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:37.539878 1921402 cri.go:96] found id: ""
	I1222 00:34:37.539892 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.539899 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:37.539905 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:37.539979 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:37.568838 1921402 cri.go:96] found id: ""
	I1222 00:34:37.568853 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.568860 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:37.568866 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:37.568934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:37.604110 1921402 cri.go:96] found id: ""
	I1222 00:34:37.604124 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.604131 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:37.604156 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:37.604212 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:37.639214 1921402 cri.go:96] found id: ""
	I1222 00:34:37.639228 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.639235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:37.639241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:37.639301 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:37.674749 1921402 cri.go:96] found id: ""
	I1222 00:34:37.674763 1921402 logs.go:282] 0 containers: []
	W1222 00:34:37.674771 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:37.674779 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:37.674790 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:37.709493 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:37.709513 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:37.781144 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:37.781164 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:37.797445 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:37.797461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:37.864668 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:37.856116   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.856660   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858305   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.858742   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:37.860244   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:37.864678 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:37.864688 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.444815 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:40.456172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:40.456239 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:40.482752 1921402 cri.go:96] found id: ""
	I1222 00:34:40.482774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.482781 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:40.482786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:40.482846 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:40.508000 1921402 cri.go:96] found id: ""
	I1222 00:34:40.508014 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.508021 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:40.508026 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:40.508087 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:40.533866 1921402 cri.go:96] found id: ""
	I1222 00:34:40.533881 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.533889 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:40.533894 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:40.533955 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:40.559781 1921402 cri.go:96] found id: ""
	I1222 00:34:40.559794 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.559802 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:40.559807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:40.559866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:40.586989 1921402 cri.go:96] found id: ""
	I1222 00:34:40.587003 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.587010 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:40.587016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:40.587074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:40.619369 1921402 cri.go:96] found id: ""
	I1222 00:34:40.619384 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.619400 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:40.619405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:40.619462 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:40.648846 1921402 cri.go:96] found id: ""
	I1222 00:34:40.648870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:40.648878 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:40.648886 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:40.648896 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:40.720161 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:40.720180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:40.736116 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:40.736141 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:40.808179 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:40.798969   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.799748   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.801565   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.802206   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:40.803919   11574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
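
The kubelet journal collected in each of these cycles is where a kube-apiserver static pod that crashes, or is never created at all, would normally surface. As an illustrative sketch (the grep pattern is an assumption, not taken from the log), the same 400-line window minikube already gathers can be narrowed to apiserver-related events:

    # Filter the journal window down to likely culprits.
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -iE 'kube-apiserver|static pod|failed' | tail -n 40
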
	I1222 00:34:40.808193 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:40.808204 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:40.876486 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:40.876505 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.409595 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:43.419504 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:43.419564 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:43.444338 1921402 cri.go:96] found id: ""
	I1222 00:34:43.444360 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.444368 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:43.444373 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:43.444430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:43.471938 1921402 cri.go:96] found id: ""
	I1222 00:34:43.471953 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.471960 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:43.471965 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:43.472025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:43.497576 1921402 cri.go:96] found id: ""
	I1222 00:34:43.497590 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.497597 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:43.497611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:43.497667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:43.526748 1921402 cri.go:96] found id: ""
	I1222 00:34:43.526762 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.526769 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:43.526775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:43.526838 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:43.556676 1921402 cri.go:96] found id: ""
	I1222 00:34:43.556691 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.556698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:43.556704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:43.556766 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:43.584425 1921402 cri.go:96] found id: ""
	I1222 00:34:43.584440 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.584447 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:43.584453 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:43.584514 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:43.619760 1921402 cri.go:96] found id: ""
	I1222 00:34:43.619774 1921402 logs.go:282] 0 containers: []
	W1222 00:34:43.619782 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:43.619789 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:43.619813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:43.693963 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:43.693985 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:43.722757 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:43.722773 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:43.790316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:43.790336 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:43.804810 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:43.804829 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:43.866141 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:43.858136   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.859002   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860616   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.860933   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:43.862365   11692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:46.366416 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:46.376013 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:46.376073 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:46.402801 1921402 cri.go:96] found id: ""
	I1222 00:34:46.402815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.402822 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:46.402828 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:46.402884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:46.431904 1921402 cri.go:96] found id: ""
	I1222 00:34:46.431920 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.431927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:46.431933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:46.431991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:46.456574 1921402 cri.go:96] found id: ""
	I1222 00:34:46.456588 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.456594 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:46.456600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:46.456656 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:46.481542 1921402 cri.go:96] found id: ""
	I1222 00:34:46.481556 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.481563 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:46.481569 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:46.481627 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:46.507668 1921402 cri.go:96] found id: ""
	I1222 00:34:46.507682 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.507689 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:46.507694 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:46.507756 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:46.531662 1921402 cri.go:96] found id: ""
	I1222 00:34:46.531675 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.531683 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:46.531688 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:46.531746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:46.558345 1921402 cri.go:96] found id: ""
	I1222 00:34:46.558359 1921402 logs.go:282] 0 containers: []
	W1222 00:34:46.558367 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:46.558375 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:46.558385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:46.630348 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:46.630367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:46.648136 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:46.648153 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:46.720659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:46.712582   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.713249   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.714935   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.715244   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:46.716733   11788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:46.720669 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:46.720689 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:46.790237 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:46.790255 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.320987 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:49.331027 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:49.331083 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:49.355873 1921402 cri.go:96] found id: ""
	I1222 00:34:49.355887 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.355903 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:49.355909 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:49.355967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:49.381127 1921402 cri.go:96] found id: ""
	I1222 00:34:49.381151 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.381158 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:49.381163 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:49.381231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:49.407961 1921402 cri.go:96] found id: ""
	I1222 00:34:49.407983 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.407990 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:49.407995 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:49.408062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:49.437856 1921402 cri.go:96] found id: ""
	I1222 00:34:49.437870 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.437877 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:49.437883 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:49.437940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:49.462901 1921402 cri.go:96] found id: ""
	I1222 00:34:49.462914 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.462921 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:49.462927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:49.462981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:49.487289 1921402 cri.go:96] found id: ""
	I1222 00:34:49.487303 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.487309 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:49.487314 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:49.487372 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:49.513186 1921402 cri.go:96] found id: ""
	I1222 00:34:49.513200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:49.513208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:49.513215 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:49.513225 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:49.527889 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:49.527904 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:49.591170 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:49.583185   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.583819   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585359   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.585923   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:49.587416   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:49.591180 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:49.591189 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:49.670799 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:49.670817 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:49.698837 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:49.698853 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.268261 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:52.278246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:52.278304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:52.303014 1921402 cri.go:96] found id: ""
	I1222 00:34:52.303035 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.303043 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:52.303048 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:52.303114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:52.331907 1921402 cri.go:96] found id: ""
	I1222 00:34:52.331921 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.331928 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:52.331933 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:52.331989 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:52.357227 1921402 cri.go:96] found id: ""
	I1222 00:34:52.357240 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.357247 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:52.357253 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:52.357315 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:52.383730 1921402 cri.go:96] found id: ""
	I1222 00:34:52.383745 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.383752 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:52.383758 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:52.383819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:52.409605 1921402 cri.go:96] found id: ""
	I1222 00:34:52.409620 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.409636 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:52.409641 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:52.409701 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:52.438817 1921402 cri.go:96] found id: ""
	I1222 00:34:52.438831 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.438838 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:52.438843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:52.438901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:52.464983 1921402 cri.go:96] found id: ""
	I1222 00:34:52.465007 1921402 logs.go:282] 0 containers: []
	W1222 00:34:52.465016 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:52.465027 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:52.465038 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:52.533460 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:52.525527   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.526139   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.527954   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.528538   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:52.529641   11988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:52.533470 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:52.533493 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:52.602597 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:52.602617 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:52.636250 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:52.636266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:52.708981 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:52.709002 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
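
[Editor's note] The block above is one full iteration of minikube's apiserver wait loop: `pgrep -xnf kube-apiserver.*minikube.*` looks for the newest process whose full command line matches exactly, then each control-plane component is queried through the CRI, and when everything comes back empty the five diagnostic logs are dumped before the next attempt roughly three seconds later. Below is a rough Go sketch of that retry shape — not minikube's actual code; the component list, commands, and cadence are inferred from the log lines above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// containerIDs mirrors the query in the log: --quiet prints only IDs,
// and -a includes exited containers, so "" means the container was
// never created at all, not merely that it crashed.
func containerIDs(name string) string {
	out, _ := exec.Command("sudo", "crictl", "--timeout=10s",
		"ps", "-a", "--quiet", "--name="+name).Output()
	return string(out)
}

func main() {
	for tick := time.Tick(3 * time.Second); ; <-tick {
		// First choice: is an apiserver process running at all?
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			if containerIDs(c) == "" {
				fmt.Printf("no container matching %q\n", c)
			}
		}
		// minikube then gathers kubelet/dmesg/describe-nodes/CRI-O/status logs
	}
}
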
	I1222 00:34:55.224727 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:55.235168 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:55.235225 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:55.260911 1921402 cri.go:96] found id: ""
	I1222 00:34:55.260926 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.260933 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:55.260938 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:55.260994 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:55.286615 1921402 cri.go:96] found id: ""
	I1222 00:34:55.286634 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.286651 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:55.286657 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:55.286736 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:55.311470 1921402 cri.go:96] found id: ""
	I1222 00:34:55.311484 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.311492 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:55.311497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:55.311555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:55.336659 1921402 cri.go:96] found id: ""
	I1222 00:34:55.336673 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.336689 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:55.336695 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:55.336755 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:55.368789 1921402 cri.go:96] found id: ""
	I1222 00:34:55.368815 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.368823 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:55.368829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:55.368895 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:55.394360 1921402 cri.go:96] found id: ""
	I1222 00:34:55.394374 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.394381 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:55.394387 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:55.394448 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:55.419182 1921402 cri.go:96] found id: ""
	I1222 00:34:55.419196 1921402 logs.go:282] 0 containers: []
	W1222 00:34:55.419204 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:55.419211 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:55.419224 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:55.486298 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:55.486316 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:55.501160 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:55.501179 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:55.569075 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:55.560917   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.561630   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563245   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.563569   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:55.565124   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:34:55.569085 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:55.569105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:55.643182 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:55.643207 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
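
[Editor's note] Every `describe nodes` attempt in this stretch dies the same way: kubectl cannot even fetch the API group list because nothing is listening on localhost:8441 inside the node ("connect: connection refused" means the TCP handshake itself is rejected, not that the apiserver is slow; 8441 rather than the default 8443 suggests the cluster was started with a non-default --apiserver-port). A minimal sketch of the same reachability check, assuming it runs where kubectl runs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" in the log means no process is bound to the
	// apiserver port at all, so every kubectl call fails immediately.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}
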
	I1222 00:34:58.182663 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:34:58.193476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:34:58.193537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:34:58.219859 1921402 cri.go:96] found id: ""
	I1222 00:34:58.219873 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.219880 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:34:58.219886 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:34:58.219944 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:34:58.245686 1921402 cri.go:96] found id: ""
	I1222 00:34:58.245700 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.245707 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:34:58.245712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:34:58.245769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:34:58.272173 1921402 cri.go:96] found id: ""
	I1222 00:34:58.272187 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.272194 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:34:58.272199 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:34:58.272258 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:34:58.296505 1921402 cri.go:96] found id: ""
	I1222 00:34:58.296519 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.296526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:34:58.296532 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:34:58.296590 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:34:58.321897 1921402 cri.go:96] found id: ""
	I1222 00:34:58.321911 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.321919 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:34:58.321924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:34:58.321981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:34:58.347185 1921402 cri.go:96] found id: ""
	I1222 00:34:58.347200 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.347206 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:34:58.347212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:34:58.347272 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:34:58.371593 1921402 cri.go:96] found id: ""
	I1222 00:34:58.371607 1921402 logs.go:282] 0 containers: []
	W1222 00:34:58.371614 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:34:58.371621 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:34:58.371631 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:34:58.439841 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:34:58.439861 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:34:58.471817 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:34:58.471833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:34:58.540043 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:34:58.540062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:34:58.556739 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:34:58.556755 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:34:58.632107 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:34:58.623307   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.624122   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626325   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.626634   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:34:58.628169   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
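
[Editor's note] The five "Gathering logs for ..." steps come out in a different order each cycle: describe nodes first at 00:34:52, kubelet first at 00:34:55, CRI-O first at 00:34:58. That is consistent with the gatherers being held in a Go map (an assumption, not confirmed by the log): range order over a map is randomized on every iteration, so each retry emits the same five dumps in a fresh order. A toy demonstration:

package main

import "fmt"

func main() {
	gatherers := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a",
	}
	for i := 0; i < 3; i++ {
		for name := range gatherers { // order varies between iterations
			fmt.Printf("%s · ", name)
		}
		fmt.Println()
	}
}
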
	I1222 00:35:01.133446 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:01.145134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:01.145201 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:01.174836 1921402 cri.go:96] found id: ""
	I1222 00:35:01.174853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.174861 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:01.174867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:01.174943 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:01.202691 1921402 cri.go:96] found id: ""
	I1222 00:35:01.202706 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.202715 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:01.202721 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:01.202785 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:01.232555 1921402 cri.go:96] found id: ""
	I1222 00:35:01.232578 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.232586 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:01.232592 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:01.232687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:01.260828 1921402 cri.go:96] found id: ""
	I1222 00:35:01.260853 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.260861 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:01.260867 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:01.260954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:01.290608 1921402 cri.go:96] found id: ""
	I1222 00:35:01.290623 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.290630 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:01.290635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:01.290721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:01.321704 1921402 cri.go:96] found id: ""
	I1222 00:35:01.321719 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.321726 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:01.321732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:01.321802 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:01.349512 1921402 cri.go:96] found id: ""
	I1222 00:35:01.349526 1921402 logs.go:282] 0 containers: []
	W1222 00:35:01.349533 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:01.349540 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:01.349551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:01.385559 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:01.385575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:01.457873 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:01.457900 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:01.473910 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:01.473928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:01.540419 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:01.531588   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.532564   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534241   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.534911   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:01.536515   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:01.540431 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:01.540442 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
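
[Editor's note] The "container status" gatherer visible in each cycle is itself a shell fallback chain: resolve crictl from PATH if installed, otherwise try the bare name, and if that whole command fails fall back to `docker ps -a`. A sketch of the same chain wrapped in an exec call the way the log shows it being run (not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `which crictl || echo crictl` resolves the binary if installed,
	// else leaves the bare name; `|| sudo docker ps -a` is the last resort.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
	}
	fmt.Print(string(out))
}
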
	I1222 00:35:04.114962 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:04.126202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:04.126262 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:04.157781 1921402 cri.go:96] found id: ""
	I1222 00:35:04.157795 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.157802 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:04.157807 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:04.157868 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:04.185090 1921402 cri.go:96] found id: ""
	I1222 00:35:04.185104 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.185114 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:04.185119 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:04.185189 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:04.210432 1921402 cri.go:96] found id: ""
	I1222 00:35:04.210446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.210453 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:04.210458 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:04.210518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:04.235769 1921402 cri.go:96] found id: ""
	I1222 00:35:04.235783 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.235791 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:04.235797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:04.235853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:04.260844 1921402 cri.go:96] found id: ""
	I1222 00:35:04.260858 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.260867 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:04.260872 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:04.260940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:04.286088 1921402 cri.go:96] found id: ""
	I1222 00:35:04.286103 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.286110 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:04.286115 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:04.286171 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:04.311902 1921402 cri.go:96] found id: ""
	I1222 00:35:04.311916 1921402 logs.go:282] 0 containers: []
	W1222 00:35:04.311923 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:04.311930 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:04.311945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:04.383800 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:04.383820 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:04.398944 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:04.398960 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:04.463648 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:04.455346   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.455995   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.457684   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.458285   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:04.459926   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:04.463658 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:04.463668 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:04.533070 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:04.533091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:07.062949 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:07.072997 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:07.073058 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:07.104724 1921402 cri.go:96] found id: ""
	I1222 00:35:07.104739 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.104746 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:07.104751 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:07.104808 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:07.142683 1921402 cri.go:96] found id: ""
	I1222 00:35:07.142697 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.142704 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:07.142712 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:07.142772 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:07.168569 1921402 cri.go:96] found id: ""
	I1222 00:35:07.168583 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.168590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:07.168596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:07.168654 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:07.196279 1921402 cri.go:96] found id: ""
	I1222 00:35:07.196295 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.196302 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:07.196307 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:07.196366 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:07.220862 1921402 cri.go:96] found id: ""
	I1222 00:35:07.220876 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.220883 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:07.220888 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:07.220967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:07.247783 1921402 cri.go:96] found id: ""
	I1222 00:35:07.247797 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.247804 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:07.247810 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:07.247866 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:07.272632 1921402 cri.go:96] found id: ""
	I1222 00:35:07.272658 1921402 logs.go:282] 0 containers: []
	W1222 00:35:07.272668 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:07.272675 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:07.272685 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:07.342006 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:07.342026 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:07.357444 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:07.357461 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:07.428354 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:07.420361   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.420963   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422429   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.422913   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:07.424462   12513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:07.428365 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:07.428376 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:07.499031 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:07.499051 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.035662 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:10.047188 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:10.047256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:10.074669 1921402 cri.go:96] found id: ""
	I1222 00:35:10.074694 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.074702 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:10.074708 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:10.074775 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:10.111688 1921402 cri.go:96] found id: ""
	I1222 00:35:10.111702 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.111710 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:10.111716 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:10.111816 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:10.141600 1921402 cri.go:96] found id: ""
	I1222 00:35:10.141614 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.141622 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:10.141635 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:10.141697 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:10.170620 1921402 cri.go:96] found id: ""
	I1222 00:35:10.170635 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.170642 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:10.170648 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:10.170709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:10.196076 1921402 cri.go:96] found id: ""
	I1222 00:35:10.196090 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.196098 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:10.196111 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:10.196173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:10.222435 1921402 cri.go:96] found id: ""
	I1222 00:35:10.222449 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.222456 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:10.222461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:10.222518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:10.248319 1921402 cri.go:96] found id: ""
	I1222 00:35:10.248333 1921402 logs.go:282] 0 containers: []
	W1222 00:35:10.248350 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:10.248356 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:10.248367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:10.263245 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:10.263267 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:10.325942 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:10.316535   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.317175   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.318874   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.319613   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:10.321408   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:10.325966 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:10.325977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:10.395942 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:10.395961 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:10.423671 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:10.423686 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:12.990380 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:13.000511 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:13.000585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:13.028285 1921402 cri.go:96] found id: ""
	I1222 00:35:13.028299 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.028306 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:13.028311 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:13.028380 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:13.055795 1921402 cri.go:96] found id: ""
	I1222 00:35:13.055809 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.055816 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:13.055821 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:13.055878 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:13.083562 1921402 cri.go:96] found id: ""
	I1222 00:35:13.083576 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.083584 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:13.083589 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:13.083647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:13.123340 1921402 cri.go:96] found id: ""
	I1222 00:35:13.123353 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.123360 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:13.123366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:13.123422 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:13.160293 1921402 cri.go:96] found id: ""
	I1222 00:35:13.160306 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.160314 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:13.160321 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:13.160393 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:13.186389 1921402 cri.go:96] found id: ""
	I1222 00:35:13.186403 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.186410 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:13.186415 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:13.186470 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:13.214187 1921402 cri.go:96] found id: ""
	I1222 00:35:13.214202 1921402 logs.go:282] 0 containers: []
	W1222 00:35:13.214208 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:13.214216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:13.214228 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:13.241162 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:13.241178 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:13.306852 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:13.306871 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:13.322142 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:13.322158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:13.389384 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:13.381302   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.382075   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383539   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.383944   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:13.385574   12724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:13.389394 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:13.389419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:15.963833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:15.973870 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:15.973930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:15.999121 1921402 cri.go:96] found id: ""
	I1222 00:35:15.999135 1921402 logs.go:282] 0 containers: []
	W1222 00:35:15.999142 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:15.999147 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:15.999203 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:16.027303 1921402 cri.go:96] found id: ""
	I1222 00:35:16.027317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.027324 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:16.027330 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:16.027390 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:16.056726 1921402 cri.go:96] found id: ""
	I1222 00:35:16.056740 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.056747 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:16.056753 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:16.056809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:16.082920 1921402 cri.go:96] found id: ""
	I1222 00:35:16.082934 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.082941 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:16.082946 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:16.083011 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:16.120145 1921402 cri.go:96] found id: ""
	I1222 00:35:16.120160 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.120167 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:16.120172 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:16.120228 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:16.149800 1921402 cri.go:96] found id: ""
	I1222 00:35:16.149814 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.149821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:16.149827 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:16.149883 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:16.175126 1921402 cri.go:96] found id: ""
	I1222 00:35:16.175140 1921402 logs.go:282] 0 containers: []
	W1222 00:35:16.175147 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:16.175155 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:16.175168 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:16.241780 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:16.241800 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:16.256196 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:16.256212 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:16.317803 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:16.309558   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.310155   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.311857   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.312414   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:16.313937   12815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
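Every kubectl describe nodes attempt in this window fails the same way: the dial to [::1]:8441 is refused, meaning nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings above. A quick manual confirmation one could run on the node (assuming ss and curl are installed; neither command is part of the log's collector):

    # Illustrative checks, not from the minikube log collector.
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on :8441"
    # A healthy apiserver would normally answer its health endpoint:
    curl -k --max-time 5 https://localhost:8441/livez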
	I1222 00:35:16.317813 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:16.317824 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:16.386643 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:16.386662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:18.918855 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:18.928947 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:18.929009 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:18.954131 1921402 cri.go:96] found id: ""
	I1222 00:35:18.954151 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.954158 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:18.954169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:18.954229 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:18.978729 1921402 cri.go:96] found id: ""
	I1222 00:35:18.978780 1921402 logs.go:282] 0 containers: []
	W1222 00:35:18.978788 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:18.978793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:18.978859 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:19.006460 1921402 cri.go:96] found id: ""
	I1222 00:35:19.006474 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.006482 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:19.006487 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:19.006547 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:19.035802 1921402 cri.go:96] found id: ""
	I1222 00:35:19.035818 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.035825 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:19.035830 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:19.035889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:19.061703 1921402 cri.go:96] found id: ""
	I1222 00:35:19.061716 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.061723 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:19.061729 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:19.061789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:19.094243 1921402 cri.go:96] found id: ""
	I1222 00:35:19.094256 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.094264 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:19.094269 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:19.094324 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:19.135211 1921402 cri.go:96] found id: ""
	I1222 00:35:19.135238 1921402 logs.go:282] 0 containers: []
	W1222 00:35:19.135246 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:19.135253 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:19.135263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:19.203366 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:19.195265   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.195660   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197152   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.197711   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:19.199424   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:19.203376 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:19.203385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:19.275567 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:19.275585 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:19.303090 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:19.303105 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:19.372108 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:19.372127 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
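When no component containers are found, the collector falls back to host-level sources: the kubelet and CRI-O journals, dmesg, and a raw container listing. The same four commands, collected verbatim from the Run: lines above into one copy-pasteable bundle (the output redirections are illustrative additions for offline reading):

    # Same gather commands as the log, with outputs saved to files.
    sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log
    sudo journalctl -u crio -n 400 > /tmp/crio.log
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status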
	I1222 00:35:21.888736 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:21.899401 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:21.899461 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:21.924030 1921402 cri.go:96] found id: ""
	I1222 00:35:21.924045 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.924052 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:21.924057 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:21.924114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:21.950593 1921402 cri.go:96] found id: ""
	I1222 00:35:21.950606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.950613 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:21.950618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:21.950676 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:21.978354 1921402 cri.go:96] found id: ""
	I1222 00:35:21.978368 1921402 logs.go:282] 0 containers: []
	W1222 00:35:21.978375 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:21.978380 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:21.978437 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:22.005709 1921402 cri.go:96] found id: ""
	I1222 00:35:22.005724 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.005734 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:22.005741 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:22.005807 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:22.033771 1921402 cri.go:96] found id: ""
	I1222 00:35:22.033785 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.033793 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:22.033799 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:22.033862 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:22.063584 1921402 cri.go:96] found id: ""
	I1222 00:35:22.063598 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.063606 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:22.063611 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:22.063672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:22.090629 1921402 cri.go:96] found id: ""
	I1222 00:35:22.090643 1921402 logs.go:282] 0 containers: []
	W1222 00:35:22.090651 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:22.090658 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:22.090669 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:22.172686 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:22.172713 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:22.192440 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:22.192459 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:22.261299 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:22.252571   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.253142   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.254600   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.255020   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:22.256724   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:22.261309 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:22.261339 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:22.337282 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:22.337301 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:24.868746 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:24.878846 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:24.878904 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:24.903813 1921402 cri.go:96] found id: ""
	I1222 00:35:24.903828 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.903835 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:24.903841 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:24.903901 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:24.930142 1921402 cri.go:96] found id: ""
	I1222 00:35:24.930157 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.930164 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:24.930169 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:24.930224 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:24.954989 1921402 cri.go:96] found id: ""
	I1222 00:35:24.955009 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.955016 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:24.955022 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:24.955082 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:24.984399 1921402 cri.go:96] found id: ""
	I1222 00:35:24.984413 1921402 logs.go:282] 0 containers: []
	W1222 00:35:24.984421 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:24.984426 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:24.984489 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:25.016621 1921402 cri.go:96] found id: ""
	I1222 00:35:25.016637 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.016644 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:25.016650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:25.016714 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:25.044807 1921402 cri.go:96] found id: ""
	I1222 00:35:25.044821 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.044830 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:25.044835 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:25.044926 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:25.071499 1921402 cri.go:96] found id: ""
	I1222 00:35:25.071514 1921402 logs.go:282] 0 containers: []
	W1222 00:35:25.071521 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:25.071530 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:25.071541 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:25.149156 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:25.149180 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:25.168790 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:25.168813 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:25.235601 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:25.226683   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.227526   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229268   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.229884   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:25.231385   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:25.235611 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:25.235622 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:25.306834 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:25.306854 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:27.843405 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:27.854516 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:27.854584 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:27.881946 1921402 cri.go:96] found id: ""
	I1222 00:35:27.881960 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.881967 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:27.881972 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:27.882034 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:27.911889 1921402 cri.go:96] found id: ""
	I1222 00:35:27.911903 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.911922 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:27.911927 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:27.911984 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:27.938642 1921402 cri.go:96] found id: ""
	I1222 00:35:27.938656 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.938687 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:27.938693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:27.938770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:27.965296 1921402 cri.go:96] found id: ""
	I1222 00:35:27.965317 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.965336 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:27.965342 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:27.965410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:27.990233 1921402 cri.go:96] found id: ""
	I1222 00:35:27.990247 1921402 logs.go:282] 0 containers: []
	W1222 00:35:27.990254 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:27.990260 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:27.990316 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:28.017956 1921402 cri.go:96] found id: ""
	I1222 00:35:28.017970 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.017986 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:28.017993 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:28.018051 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:28.046152 1921402 cri.go:96] found id: ""
	I1222 00:35:28.046174 1921402 logs.go:282] 0 containers: []
	W1222 00:35:28.046182 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:28.046189 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:28.046200 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:28.114636 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:28.114656 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:28.131317 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:28.131333 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:28.201574 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:28.192734   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.193301   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195050   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.195662   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:28.197132   13245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:28.201584 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:28.201594 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:28.273807 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:28.273827 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:30.802833 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:30.812760 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:30.812828 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:30.837592 1921402 cri.go:96] found id: ""
	I1222 00:35:30.837606 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.837613 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:30.837618 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:30.837678 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:30.866275 1921402 cri.go:96] found id: ""
	I1222 00:35:30.866289 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.866296 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:30.866301 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:30.866360 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:30.891264 1921402 cri.go:96] found id: ""
	I1222 00:35:30.891279 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.891286 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:30.891292 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:30.891350 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:30.915256 1921402 cri.go:96] found id: ""
	I1222 00:35:30.915270 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.915277 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:30.915288 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:30.915344 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:30.939875 1921402 cri.go:96] found id: ""
	I1222 00:35:30.939888 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.939895 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:30.939901 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:30.939959 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:30.964213 1921402 cri.go:96] found id: ""
	I1222 00:35:30.964227 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.964235 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:30.964240 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:30.964298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:30.989063 1921402 cri.go:96] found id: ""
	I1222 00:35:30.989076 1921402 logs.go:282] 0 containers: []
	W1222 00:35:30.989084 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:30.989091 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:30.989104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:31.055959 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:31.046326   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.046942   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.048638   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.049317   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:31.051002   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:31.055969 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:31.055980 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:31.130704 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:31.130724 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:31.175086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:31.175102 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:31.241755 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:31.241774 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:33.758213 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:33.768479 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:33.768543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:33.800290 1921402 cri.go:96] found id: ""
	I1222 00:35:33.800304 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.800311 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:33.800317 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:33.800374 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:33.828235 1921402 cri.go:96] found id: ""
	I1222 00:35:33.828249 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.828256 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:33.828261 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:33.828319 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:33.853431 1921402 cri.go:96] found id: ""
	I1222 00:35:33.853446 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.853454 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:33.853459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:33.853517 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:33.878506 1921402 cri.go:96] found id: ""
	I1222 00:35:33.878520 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.878526 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:33.878531 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:33.878589 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:33.903278 1921402 cri.go:96] found id: ""
	I1222 00:35:33.903292 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.903299 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:33.903305 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:33.903361 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:33.931624 1921402 cri.go:96] found id: ""
	I1222 00:35:33.931638 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.931645 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:33.931650 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:33.931709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:33.960994 1921402 cri.go:96] found id: ""
	I1222 00:35:33.961013 1921402 logs.go:282] 0 containers: []
	W1222 00:35:33.961029 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:33.961042 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:33.961053 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:34.028536 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:34.028559 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:34.043699 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:34.043717 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:34.113813 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:34.106074   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.106690   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108250   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.108558   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:34.109853   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:34.113822 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:34.113832 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:34.196400 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:34.196421 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:36.727299 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:36.737834 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:36.737892 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:36.762640 1921402 cri.go:96] found id: ""
	I1222 00:35:36.762654 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.762661 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:36.762666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:36.762722 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:36.789016 1921402 cri.go:96] found id: ""
	I1222 00:35:36.789030 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.789036 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:36.789041 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:36.789097 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:36.814574 1921402 cri.go:96] found id: ""
	I1222 00:35:36.814588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.814595 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:36.814600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:36.814661 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:36.839346 1921402 cri.go:96] found id: ""
	I1222 00:35:36.839361 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.839368 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:36.839374 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:36.839430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:36.868832 1921402 cri.go:96] found id: ""
	I1222 00:35:36.868846 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.868853 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:36.868858 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:36.868930 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:36.894581 1921402 cri.go:96] found id: ""
	I1222 00:35:36.894595 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.894602 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:36.894607 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:36.894664 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:36.924019 1921402 cri.go:96] found id: ""
	I1222 00:35:36.924033 1921402 logs.go:282] 0 containers: []
	W1222 00:35:36.924040 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:36.924047 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:36.924059 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:36.990902 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:36.990921 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:37.008849 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:37.008867 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:37.083336 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:35:37.074366   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.075135   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.076759   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.077382   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:37.078956   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:35:37.083345 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:37.083356 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:37.173618 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:37.173638 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:39.706094 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:39.716405 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:39.716478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:39.746846 1921402 cri.go:96] found id: ""
	I1222 00:35:39.746861 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.746868 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:39.746873 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:39.746933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:39.775788 1921402 cri.go:96] found id: ""
	I1222 00:35:39.775802 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.775810 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:39.775815 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:39.775872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:39.800503 1921402 cri.go:96] found id: ""
	I1222 00:35:39.800523 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.800531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:39.800536 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:39.800595 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:39.829015 1921402 cri.go:96] found id: ""
	I1222 00:35:39.829028 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.829035 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:39.829040 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:39.829096 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:39.856667 1921402 cri.go:96] found id: ""
	I1222 00:35:39.856680 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.856688 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:39.856693 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:39.856752 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:39.881236 1921402 cri.go:96] found id: ""
	I1222 00:35:39.881259 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.881266 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:39.881272 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:39.881365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:39.907774 1921402 cri.go:96] found id: ""
	I1222 00:35:39.907788 1921402 logs.go:282] 0 containers: []
	W1222 00:35:39.907795 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:39.907803 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:39.907814 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:39.973961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:39.973979 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:39.988619 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:39.988633 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:40.060659 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:40.051693   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.052397   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.053956   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.054578   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:40.056071   13657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:40.060671 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:40.060682 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:40.133570 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:40.133590 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
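The block above is one pass of minikube's apiserver wait loop: probe for a kube-apiserver process with pgrep, list each expected control-plane container by name with crictl, and, when every listing comes back empty, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal bash sketch of that loop, reconstructed from the commands and the roughly three-second cadence visible in the timestamps (the retry interval and component list are read off this log, not minikube's source):

    #!/usr/bin/env bash
    # Poll for a running kube-apiserver; on each miss, list the control-plane
    # containers the same way the log above does and note which are absent.
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet)
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for c in "${components[@]}"; do
        ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
      done
      sleep 3   # matches the ~3 s spacing between cycles in this log
    done
    echo "kube-apiserver process is up"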
	I1222 00:35:42.668577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:42.678942 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:42.678998 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:42.708217 1921402 cri.go:96] found id: ""
	I1222 00:35:42.708231 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.708237 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:42.708243 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:42.708298 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:42.733470 1921402 cri.go:96] found id: ""
	I1222 00:35:42.733484 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.733491 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:42.733497 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:42.733555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:42.758025 1921402 cri.go:96] found id: ""
	I1222 00:35:42.758039 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.758050 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:42.758056 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:42.758114 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:42.783070 1921402 cri.go:96] found id: ""
	I1222 00:35:42.783084 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.783092 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:42.783097 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:42.783156 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:42.813432 1921402 cri.go:96] found id: ""
	I1222 00:35:42.813447 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.813454 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:42.813459 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:42.813522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:42.841107 1921402 cri.go:96] found id: ""
	I1222 00:35:42.841121 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.841137 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:42.841143 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:42.841208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:42.866521 1921402 cri.go:96] found id: ""
	I1222 00:35:42.866535 1921402 logs.go:282] 0 containers: []
	W1222 00:35:42.866543 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:42.866551 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:42.866562 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:42.933074 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:42.933092 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:42.950000 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:42.950025 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:43.021528 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:43.013282   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.014132   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.015640   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.016111   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:43.017629   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:43.021539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:43.021551 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:43.096285 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:43.096308 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
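Every kubectl invocation in these cycles fails with dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port at all, so describe nodes can never succeed. A quick manual check from inside the node (port 8441 is taken from the errors above; /healthz is the standard apiserver liveness endpoint, and -k only skips certificate verification for this probe):

    # Is anything answering on the apiserver port?
    curl -sk --max-time 5 https://localhost:8441/healthz \
      || echo "apiserver not listening on 8441"
    # Does any process hold the port?
    sudo ss -ltnp 'sport = :8441'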
	I1222 00:35:45.625156 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:45.635680 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:45.635759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:45.665749 1921402 cri.go:96] found id: ""
	I1222 00:35:45.665763 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.665770 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:45.665775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:45.665833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:45.692343 1921402 cri.go:96] found id: ""
	I1222 00:35:45.692357 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.692364 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:45.692370 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:45.692432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:45.721093 1921402 cri.go:96] found id: ""
	I1222 00:35:45.721108 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.721115 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:45.721120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:45.721180 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:45.747092 1921402 cri.go:96] found id: ""
	I1222 00:35:45.747107 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.747114 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:45.747120 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:45.747177 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:45.776676 1921402 cri.go:96] found id: ""
	I1222 00:35:45.776691 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.776698 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:45.776704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:45.776760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:45.802786 1921402 cri.go:96] found id: ""
	I1222 00:35:45.802800 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.802808 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:45.802814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:45.802873 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:45.828248 1921402 cri.go:96] found id: ""
	I1222 00:35:45.828269 1921402 logs.go:282] 0 containers: []
	W1222 00:35:45.828279 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:45.828287 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:45.828299 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:45.844960 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:45.844976 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:45.915742 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:45.907681   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.908311   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.909760   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.910239   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:45.911835   13866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:45.915756 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:45.915766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:45.994270 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:45.994295 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:46.025259 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:46.025275 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
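Each found id: "" line is the raw output of crictl ps --quiet --name=<component>, which prints nothing but container IDs; an empty result is what triggers the paired "No container was found matching" warning. The same check by hand, using the exact flags from the log:

    # IDs only (empty output means no matching container in any state):
    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
    # Human-readable listing for the same filter:
    sudo crictl ps -a --name=kube-apiserver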
	I1222 00:35:48.596430 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:48.606663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:48.606725 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:48.635341 1921402 cri.go:96] found id: ""
	I1222 00:35:48.635356 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.635363 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:48.635369 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:48.635430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:48.661161 1921402 cri.go:96] found id: ""
	I1222 00:35:48.661184 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.661191 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:48.661196 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:48.661259 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:48.690450 1921402 cri.go:96] found id: ""
	I1222 00:35:48.690464 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.690470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:48.690476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:48.690537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:48.716347 1921402 cri.go:96] found id: ""
	I1222 00:35:48.716363 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.716388 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:48.716409 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:48.716477 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:48.746573 1921402 cri.go:96] found id: ""
	I1222 00:35:48.746588 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.746595 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:48.746600 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:48.746659 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:48.772755 1921402 cri.go:96] found id: ""
	I1222 00:35:48.772770 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.772777 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:48.772783 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:48.772843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:48.799586 1921402 cri.go:96] found id: ""
	I1222 00:35:48.799600 1921402 logs.go:282] 0 containers: []
	W1222 00:35:48.799607 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:48.799615 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:48.799629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:48.860067 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:48.852213   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.852883   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.853939   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.854510   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:48.856073   13965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:48.860076 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:48.860087 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:48.930351 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:48.930371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:48.960079 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:48.960099 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:49.028235 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:49.028254 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
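The recurring "Gathering logs for ..." steps map to fixed shell commands, quoted verbatim in the lines above and runnable by hand on the node; the journal and kernel-log portion is:

    sudo journalctl -u kubelet -n 400    # last 400 kubelet journal lines
    sudo journalctl -u crio -n 400       # last 400 CRI-O journal lines
    # Kernel messages at warning level and above, no pager, no color:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400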
	I1222 00:35:51.545623 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:51.556102 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:51.556160 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:51.584081 1921402 cri.go:96] found id: ""
	I1222 00:35:51.584095 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.584102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:51.584108 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:51.584166 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:51.608602 1921402 cri.go:96] found id: ""
	I1222 00:35:51.608616 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.608623 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:51.608628 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:51.608684 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:51.638215 1921402 cri.go:96] found id: ""
	I1222 00:35:51.638229 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.638236 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:51.638242 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:51.638300 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:51.673979 1921402 cri.go:96] found id: ""
	I1222 00:35:51.673993 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.674000 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:51.674005 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:51.674062 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:51.703793 1921402 cri.go:96] found id: ""
	I1222 00:35:51.703807 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.703814 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:51.703820 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:51.703879 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:51.731106 1921402 cri.go:96] found id: ""
	I1222 00:35:51.731120 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.731128 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:51.731133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:51.731191 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:51.758738 1921402 cri.go:96] found id: ""
	I1222 00:35:51.758752 1921402 logs.go:282] 0 containers: []
	W1222 00:35:51.758759 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:51.758766 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:51.758776 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:51.788397 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:51.788413 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:51.855218 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:51.855236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:51.870464 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:51.870480 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:51.935854 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:51.927185   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.927877   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.929600   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.930120   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:51.931805   14084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:51.935864 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:51.935874 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:54.506346 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:54.516494 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:54.516555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:54.542861 1921402 cri.go:96] found id: ""
	I1222 00:35:54.542877 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.542884 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:54.542889 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:54.542947 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:54.567393 1921402 cri.go:96] found id: ""
	I1222 00:35:54.567409 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.567417 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:54.567422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:54.567480 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:54.596220 1921402 cri.go:96] found id: ""
	I1222 00:35:54.596234 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.596241 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:54.596246 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:54.596307 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:54.621195 1921402 cri.go:96] found id: ""
	I1222 00:35:54.621210 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.621217 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:54.621223 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:54.621286 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:54.647655 1921402 cri.go:96] found id: ""
	I1222 00:35:54.647669 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.647676 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:54.647681 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:54.647746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:54.673352 1921402 cri.go:96] found id: ""
	I1222 00:35:54.673365 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.673372 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:54.673378 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:54.673434 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:54.702345 1921402 cri.go:96] found id: ""
	I1222 00:35:54.702359 1921402 logs.go:282] 0 containers: []
	W1222 00:35:54.702366 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:54.702373 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:54.702385 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:35:54.729725 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:54.729740 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:54.795556 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:54.795573 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:54.810399 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:54.810415 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:54.873205 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:54.864296   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.865005   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.866876   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.867451   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:54.869170   14190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:54.873215 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:54.873236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.442564 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:35:57.452367 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:35:57.452430 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:35:57.479752 1921402 cri.go:96] found id: ""
	I1222 00:35:57.479765 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.479773 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:35:57.479780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:35:57.479843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:35:57.505262 1921402 cri.go:96] found id: ""
	I1222 00:35:57.505277 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.505286 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:35:57.505291 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:35:57.505367 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:35:57.530750 1921402 cri.go:96] found id: ""
	I1222 00:35:57.530764 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.530771 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:35:57.530777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:35:57.530833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:35:57.557273 1921402 cri.go:96] found id: ""
	I1222 00:35:57.557286 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.557293 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:35:57.557298 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:35:57.557375 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:35:57.589790 1921402 cri.go:96] found id: ""
	I1222 00:35:57.589803 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.589810 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:35:57.589816 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:35:57.589872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:35:57.615560 1921402 cri.go:96] found id: ""
	I1222 00:35:57.615573 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.615580 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:35:57.615585 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:35:57.615647 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:35:57.643950 1921402 cri.go:96] found id: ""
	I1222 00:35:57.643964 1921402 logs.go:282] 0 containers: []
	W1222 00:35:57.643971 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:35:57.643979 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:35:57.643989 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:35:57.710337 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:35:57.710357 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:35:57.725273 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:35:57.725289 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:35:57.792176 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:35:57.782446   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.783810   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.784701   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.785725   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:35:57.786474   14286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:35:57.792186 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:35:57.792202 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:35:57.869558 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:35:57.869578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:00.405062 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:00.432605 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:00.432680 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:00.470863 1921402 cri.go:96] found id: ""
	I1222 00:36:00.470879 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.470886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:00.470892 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:00.470958 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:00.501706 1921402 cri.go:96] found id: ""
	I1222 00:36:00.501723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.501736 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:00.501743 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:00.501825 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:00.531718 1921402 cri.go:96] found id: ""
	I1222 00:36:00.531733 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.531741 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:00.531746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:00.531814 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:00.560495 1921402 cri.go:96] found id: ""
	I1222 00:36:00.560511 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.560518 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:00.560525 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:00.560585 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:00.588202 1921402 cri.go:96] found id: ""
	I1222 00:36:00.588217 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.588225 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:00.588230 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:00.588293 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:00.616313 1921402 cri.go:96] found id: ""
	I1222 00:36:00.616328 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.616335 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:00.616341 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:00.616402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:00.647784 1921402 cri.go:96] found id: ""
	I1222 00:36:00.647799 1921402 logs.go:282] 0 containers: []
	W1222 00:36:00.647806 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:00.647814 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:00.647823 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:00.715096 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:00.715115 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:00.730829 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:00.730845 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:00.797617 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:00.788730   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.789602   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791229   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.791817   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:00.793582   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:00.797630 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:00.797642 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:00.866765 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:00.866784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
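The container status step wraps crictl in a fallback: `which crictl || echo crictl` keeps the literal name when which finds nothing (so the eventual failure message still says crictl), and || sudo docker ps -a covers Docker-runtime nodes. A commented sketch of the same shape:

    # Resolve crictl if installed; otherwise keep the bare name so the
    # eventual "command not found" error is still legible.
    runtime_cli=$(which crictl || echo crictl)
    sudo "$runtime_cli" ps -a || sudo docker ps -a   # docker CLI fallback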
	I1222 00:36:03.396597 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:03.407146 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:03.407208 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:03.433039 1921402 cri.go:96] found id: ""
	I1222 00:36:03.433053 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.433061 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:03.433066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:03.433141 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:03.458706 1921402 cri.go:96] found id: ""
	I1222 00:36:03.458720 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.458727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:03.458732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:03.458789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:03.484510 1921402 cri.go:96] found id: ""
	I1222 00:36:03.484524 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.484531 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:03.484537 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:03.484607 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:03.510384 1921402 cri.go:96] found id: ""
	I1222 00:36:03.510399 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.510407 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:03.510413 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:03.510475 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:03.543748 1921402 cri.go:96] found id: ""
	I1222 00:36:03.543763 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.543770 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:03.543775 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:03.543833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:03.571136 1921402 cri.go:96] found id: ""
	I1222 00:36:03.571150 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.571157 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:03.571162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:03.571219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:03.600085 1921402 cri.go:96] found id: ""
	I1222 00:36:03.600100 1921402 logs.go:282] 0 containers: []
	W1222 00:36:03.600107 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:03.600113 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:03.600124 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:03.664241 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:03.655839   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.656491   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658208   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.658778   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:03.660390   14488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:03.664252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:03.664264 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:03.738233 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:03.738260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:03.773144 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:03.773160 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:03.841880 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:03.841901 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
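Note that describe nodes is run with the release-pinned kubectl shipped inside the node and the node's own kubeconfig, exactly as logged above:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig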
	I1222 00:36:06.357571 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:06.368813 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:06.368874 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:06.400278 1921402 cri.go:96] found id: ""
	I1222 00:36:06.400293 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.400301 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:06.400312 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:06.400373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:06.430787 1921402 cri.go:96] found id: ""
	I1222 00:36:06.430801 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.430809 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:06.430814 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:06.430872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:06.455395 1921402 cri.go:96] found id: ""
	I1222 00:36:06.455409 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.455416 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:06.455421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:06.455478 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:06.485881 1921402 cri.go:96] found id: ""
	I1222 00:36:06.485895 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.485902 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:06.485908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:06.485981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:06.512824 1921402 cri.go:96] found id: ""
	I1222 00:36:06.512838 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.512845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:06.512850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:06.512918 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:06.540462 1921402 cri.go:96] found id: ""
	I1222 00:36:06.540477 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.540494 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:06.540500 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:06.540572 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:06.566709 1921402 cri.go:96] found id: ""
	I1222 00:36:06.566723 1921402 logs.go:282] 0 containers: []
	W1222 00:36:06.566731 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:06.566738 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:06.566750 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:06.634309 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:06.634331 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:06.649342 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:06.649359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:06.719108 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:06.709644   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.711082   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.712027   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.713691   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:06.714019   14599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:06.719119 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:06.719129 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:06.791062 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:06.791080 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.321217 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:09.331643 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:09.331702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:09.370112 1921402 cri.go:96] found id: ""
	I1222 00:36:09.370139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.370146 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:09.370152 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:09.370216 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:09.400020 1921402 cri.go:96] found id: ""
	I1222 00:36:09.400041 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.400049 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:09.400055 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:09.400126 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:09.436087 1921402 cri.go:96] found id: ""
	I1222 00:36:09.436114 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.436121 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:09.436126 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:09.436194 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:09.463697 1921402 cri.go:96] found id: ""
	I1222 00:36:09.463711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.463718 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:09.463724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:09.463780 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:09.491471 1921402 cri.go:96] found id: ""
	I1222 00:36:09.491485 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.491492 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:09.491498 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:09.491555 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:09.516866 1921402 cri.go:96] found id: ""
	I1222 00:36:09.516880 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.516903 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:09.516908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:09.516967 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:09.545747 1921402 cri.go:96] found id: ""
	I1222 00:36:09.545761 1921402 logs.go:282] 0 containers: []
	W1222 00:36:09.545768 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:09.545777 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:09.545788 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:09.607682 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:09.599126   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.599834   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.601473   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.602038   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:09.603633   14699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:09.607692 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:09.607702 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:09.682101 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:09.682120 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:09.713866 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:09.713881 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:09.781658 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:09.781677 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.297278 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:12.307636 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:12.307707 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:12.337251 1921402 cri.go:96] found id: ""
	I1222 00:36:12.337266 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.337273 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:12.337278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:12.337373 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:12.371706 1921402 cri.go:96] found id: ""
	I1222 00:36:12.371719 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.371727 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:12.371732 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:12.371789 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:12.411047 1921402 cri.go:96] found id: ""
	I1222 00:36:12.411060 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.411067 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:12.411072 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:12.411132 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:12.435536 1921402 cri.go:96] found id: ""
	I1222 00:36:12.435550 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.435557 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:12.435562 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:12.435623 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:12.461780 1921402 cri.go:96] found id: ""
	I1222 00:36:12.461794 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.461802 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:12.461808 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:12.461872 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:12.487870 1921402 cri.go:96] found id: ""
	I1222 00:36:12.487884 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.487891 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:12.487897 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:12.487954 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:12.513933 1921402 cri.go:96] found id: ""
	I1222 00:36:12.513947 1921402 logs.go:282] 0 containers: []
	W1222 00:36:12.513954 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:12.513961 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:12.513972 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:12.528919 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:12.528934 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:12.596732 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:12.587250   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.587883   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.589568   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.590112   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:12.592244   14813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:12.596742 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:12.596753 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:12.669434 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:12.669452 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:12.702088 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:12.702104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:15.269222 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:15.279437 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:15.279497 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:15.308005 1921402 cri.go:96] found id: ""
	I1222 00:36:15.308019 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.308026 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:15.308034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:15.308104 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:15.335800 1921402 cri.go:96] found id: ""
	I1222 00:36:15.335814 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.335822 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:15.335829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:15.335889 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:15.364041 1921402 cri.go:96] found id: ""
	I1222 00:36:15.364055 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.364064 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:15.364070 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:15.364127 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:15.396634 1921402 cri.go:96] found id: ""
	I1222 00:36:15.396649 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.396657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:15.396663 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:15.396721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:15.427551 1921402 cri.go:96] found id: ""
	I1222 00:36:15.427565 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.427572 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:15.427578 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:15.427640 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:15.462972 1921402 cri.go:96] found id: ""
	I1222 00:36:15.462987 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.462994 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:15.463000 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:15.463057 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:15.488170 1921402 cri.go:96] found id: ""
	I1222 00:36:15.488184 1921402 logs.go:282] 0 containers: []
	W1222 00:36:15.488201 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:15.488208 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:15.488219 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:15.503474 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:15.503499 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:15.576848 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:15.568696   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.569154   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.570855   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.571328   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:15.572897   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:15.576859 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:15.576870 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:15.647239 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:15.647258 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:15.677589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:15.677604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.247711 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:18.259251 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:18.259314 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:18.285041 1921402 cri.go:96] found id: ""
	I1222 00:36:18.285056 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.285063 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:18.285068 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:18.285129 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:18.312429 1921402 cri.go:96] found id: ""
	I1222 00:36:18.312442 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.312449 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:18.312455 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:18.312509 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:18.342758 1921402 cri.go:96] found id: ""
	I1222 00:36:18.342772 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.342779 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:18.342786 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:18.342844 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:18.376484 1921402 cri.go:96] found id: ""
	I1222 00:36:18.376497 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.376505 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:18.376510 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:18.376567 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:18.411956 1921402 cri.go:96] found id: ""
	I1222 00:36:18.411970 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.411987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:18.411992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:18.412059 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:18.441521 1921402 cri.go:96] found id: ""
	I1222 00:36:18.441535 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.441542 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:18.441549 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:18.441606 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:18.466320 1921402 cri.go:96] found id: ""
	I1222 00:36:18.466335 1921402 logs.go:282] 0 containers: []
	W1222 00:36:18.466342 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:18.466349 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:18.466359 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:18.540350 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:18.540370 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:18.555809 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:18.555826 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:18.620233 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:18.612078   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.612476   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.613960   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.615241   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:18.616445   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:18.620247 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:18.620260 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:18.690556 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:18.690578 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.222964 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:21.233019 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:21.233077 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:21.258106 1921402 cri.go:96] found id: ""
	I1222 00:36:21.258120 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.258127 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:21.258133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:21.258190 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:21.282679 1921402 cri.go:96] found id: ""
	I1222 00:36:21.282693 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.282699 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:21.282704 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:21.282761 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:21.311750 1921402 cri.go:96] found id: ""
	I1222 00:36:21.311764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.311789 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:21.311794 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:21.311861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:21.340844 1921402 cri.go:96] found id: ""
	I1222 00:36:21.340858 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.340866 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:21.340871 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:21.340940 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:21.375873 1921402 cri.go:96] found id: ""
	I1222 00:36:21.375898 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.375905 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:21.375910 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:21.375973 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:21.406752 1921402 cri.go:96] found id: ""
	I1222 00:36:21.406767 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.406775 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:21.406780 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:21.406845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:21.430878 1921402 cri.go:96] found id: ""
	I1222 00:36:21.430892 1921402 logs.go:282] 0 containers: []
	W1222 00:36:21.430899 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:21.430906 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:21.430925 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:21.499738 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:21.499757 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:21.532589 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:21.532604 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:21.600165 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:21.600183 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:21.615391 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:21.615407 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:21.684109 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:21.675788   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.676744   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678413   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.678724   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:21.680260   15145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.185197 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:24.196664 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:24.196744 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:24.223518 1921402 cri.go:96] found id: ""
	I1222 00:36:24.223533 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.223540 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:24.223546 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:24.223609 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:24.250130 1921402 cri.go:96] found id: ""
	I1222 00:36:24.250145 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.250152 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:24.250159 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:24.250218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:24.275065 1921402 cri.go:96] found id: ""
	I1222 00:36:24.275079 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.275086 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:24.275091 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:24.275147 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:24.299603 1921402 cri.go:96] found id: ""
	I1222 00:36:24.299617 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.299625 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:24.299630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:24.299688 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:24.325078 1921402 cri.go:96] found id: ""
	I1222 00:36:24.325092 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.325099 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:24.325116 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:24.325173 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:24.357840 1921402 cri.go:96] found id: ""
	I1222 00:36:24.357854 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.357871 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:24.357877 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:24.357933 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:24.384301 1921402 cri.go:96] found id: ""
	I1222 00:36:24.384315 1921402 logs.go:282] 0 containers: []
	W1222 00:36:24.384345 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:24.384356 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:24.384371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:24.455452 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:24.455475 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:24.470577 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:24.470593 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:24.536053 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:24.526996   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.527804   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.529667   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.530226   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:24.531796   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:24.536310 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:24.536325 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:24.615547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:24.615572 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.151479 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:27.161793 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:27.161853 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:27.187096 1921402 cri.go:96] found id: ""
	I1222 00:36:27.187110 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.187117 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:27.187123 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:27.187179 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:27.212527 1921402 cri.go:96] found id: ""
	I1222 00:36:27.212541 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.212548 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:27.212553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:27.212613 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:27.241441 1921402 cri.go:96] found id: ""
	I1222 00:36:27.241455 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.241462 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:27.241467 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:27.241522 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:27.266764 1921402 cri.go:96] found id: ""
	I1222 00:36:27.266778 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.266786 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:27.266791 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:27.266848 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:27.292091 1921402 cri.go:96] found id: ""
	I1222 00:36:27.292105 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.292113 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:27.292118 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:27.292175 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:27.316799 1921402 cri.go:96] found id: ""
	I1222 00:36:27.316813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.316821 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:27.316826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:27.316896 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:27.341725 1921402 cri.go:96] found id: ""
	I1222 00:36:27.341739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:27.341746 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:27.341754 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:27.341764 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:27.363209 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:27.363227 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:27.435344 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:27.427029   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.427893   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.429703   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.430074   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:27.431490   15343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:27.435355 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:27.435367 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:27.505547 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:27.505565 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:27.534380 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:27.534395 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.100897 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:30.112047 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:30.112110 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:30.143007 1921402 cri.go:96] found id: ""
	I1222 00:36:30.143021 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.143028 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:30.143034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:30.143091 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:30.168174 1921402 cri.go:96] found id: ""
	I1222 00:36:30.168188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.168196 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:30.168202 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:30.168270 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:30.198167 1921402 cri.go:96] found id: ""
	I1222 00:36:30.198181 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.198189 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:30.198194 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:30.198254 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:30.228238 1921402 cri.go:96] found id: ""
	I1222 00:36:30.228252 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.228259 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:30.228265 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:30.228326 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:30.252893 1921402 cri.go:96] found id: ""
	I1222 00:36:30.252907 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.252914 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:30.252920 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:30.252981 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:30.282045 1921402 cri.go:96] found id: ""
	I1222 00:36:30.282067 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.282074 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:30.282080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:30.282146 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:30.308008 1921402 cri.go:96] found id: ""
	I1222 00:36:30.308020 1921402 logs.go:282] 0 containers: []
	W1222 00:36:30.308027 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:30.308036 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:30.308047 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:30.380173 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:30.372169   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.372923   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.373945   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.374478   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:30.375979   15437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:30.380183 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:30.380193 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:30.455922 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:30.455942 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:30.489214 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:30.489236 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:30.560024 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:30.560042 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.076015 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:33.086212 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:33.086271 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:33.111816 1921402 cri.go:96] found id: ""
	I1222 00:36:33.111830 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.111837 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:33.111842 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:33.111899 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:33.139689 1921402 cri.go:96] found id: ""
	I1222 00:36:33.139702 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.139709 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:33.139715 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:33.139771 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:33.164725 1921402 cri.go:96] found id: ""
	I1222 00:36:33.164739 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.164746 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:33.164752 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:33.164809 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:33.191640 1921402 cri.go:96] found id: ""
	I1222 00:36:33.191654 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.191661 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:33.191666 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:33.191721 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:33.217604 1921402 cri.go:96] found id: ""
	I1222 00:36:33.217618 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.217625 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:33.217631 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:33.217690 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:33.243174 1921402 cri.go:96] found id: ""
	I1222 00:36:33.243188 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.243195 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:33.243201 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:33.243256 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:33.271929 1921402 cri.go:96] found id: ""
	I1222 00:36:33.271943 1921402 logs.go:282] 0 containers: []
	W1222 00:36:33.271950 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:33.271958 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:33.271968 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:33.300147 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:33.300163 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:33.366779 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:33.366797 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:33.382105 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:33.382122 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:33.445242 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:33.437172   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.438053   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439494   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.439798   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:33.441455   15567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:33.445252 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:33.445261 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:36.014121 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:36.025137 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:36.025200 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:36.051061 1921402 cri.go:96] found id: ""
	I1222 00:36:36.051077 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.051085 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:36.051090 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:36.051154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:36.082445 1921402 cri.go:96] found id: ""
	I1222 00:36:36.082458 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.082466 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:36.082471 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:36.082543 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:36.108654 1921402 cri.go:96] found id: ""
	I1222 00:36:36.108668 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.108676 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:36.108682 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:36.108746 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:36.133581 1921402 cri.go:96] found id: ""
	I1222 00:36:36.133595 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.133602 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:36.133608 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:36.133672 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:36.158297 1921402 cri.go:96] found id: ""
	I1222 00:36:36.158311 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.158318 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:36.158324 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:36.158397 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:36.188437 1921402 cri.go:96] found id: ""
	I1222 00:36:36.188451 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.188458 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:36.188464 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:36.188521 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:36.213711 1921402 cri.go:96] found id: ""
	I1222 00:36:36.213725 1921402 logs.go:282] 0 containers: []
	W1222 00:36:36.213732 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:36.213740 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:36.213752 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:36.241158 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:36.241174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:36.309133 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:36.309152 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:36.323930 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:36.323945 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:36.412954 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:36.404735   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.405249   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.406853   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.407363   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:36.408892   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:36.412965 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:36.412977 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:38.983065 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:38.993162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:38.993219 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:39.019770 1921402 cri.go:96] found id: ""
	I1222 00:36:39.019784 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.019791 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:39.019797 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:39.019854 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:39.044991 1921402 cri.go:96] found id: ""
	I1222 00:36:39.045005 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.045012 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:39.045017 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:39.045074 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:39.070008 1921402 cri.go:96] found id: ""
	I1222 00:36:39.070022 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.070029 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:39.070034 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:39.070107 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:39.094755 1921402 cri.go:96] found id: ""
	I1222 00:36:39.094768 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.094776 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:39.094781 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:39.094837 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:39.120054 1921402 cri.go:96] found id: ""
	I1222 00:36:39.120068 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.120075 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:39.120080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:39.120136 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:39.144698 1921402 cri.go:96] found id: ""
	I1222 00:36:39.144711 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.144719 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:39.144724 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:39.144777 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:39.169125 1921402 cri.go:96] found id: ""
	I1222 00:36:39.169139 1921402 logs.go:282] 0 containers: []
	W1222 00:36:39.169146 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:39.169153 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:39.169162 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:39.235404 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:39.235423 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:39.250312 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:39.250330 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:39.310727 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:39.302689   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.303439   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.304613   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.305172   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:39.306934   15759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:39.310737 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:39.310749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:39.385727 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:39.385745 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:41.917457 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:41.927340 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:41.927400 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:41.952148 1921402 cri.go:96] found id: ""
	I1222 00:36:41.952162 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.952169 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:41.952175 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:41.952231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:41.977527 1921402 cri.go:96] found id: ""
	I1222 00:36:41.977542 1921402 logs.go:282] 0 containers: []
	W1222 00:36:41.977549 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:41.977555 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:41.977625 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:42.005444 1921402 cri.go:96] found id: ""
	I1222 00:36:42.005462 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.005470 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:42.005476 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:42.005551 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:42.041797 1921402 cri.go:96] found id: ""
	I1222 00:36:42.041813 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.041820 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:42.041826 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:42.041888 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:42.069781 1921402 cri.go:96] found id: ""
	I1222 00:36:42.069797 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.069806 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:42.069812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:42.069880 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:42.104984 1921402 cri.go:96] found id: ""
	I1222 00:36:42.105000 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.105009 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:42.105016 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:42.105125 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:42.150604 1921402 cri.go:96] found id: ""
	I1222 00:36:42.150620 1921402 logs.go:282] 0 containers: []
	W1222 00:36:42.150629 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:42.150637 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:42.150649 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:42.223375 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:42.223400 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:42.242775 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:42.242802 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:42.318351 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:42.309198   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.310250   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.311864   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.312509   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:42.314027   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:42.318361 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:42.318371 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:42.393293 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:42.393311 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:44.928982 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:44.938908 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:44.938968 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:44.963638 1921402 cri.go:96] found id: ""
	I1222 00:36:44.963652 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.963659 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:44.963665 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:44.963723 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:44.988816 1921402 cri.go:96] found id: ""
	I1222 00:36:44.988831 1921402 logs.go:282] 0 containers: []
	W1222 00:36:44.988838 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:44.988843 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:44.988908 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:45.056183 1921402 cri.go:96] found id: ""
	I1222 00:36:45.056202 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.056212 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:45.056218 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:45.056311 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:45.094388 1921402 cri.go:96] found id: ""
	I1222 00:36:45.094407 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.094416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:45.094422 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:45.094503 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:45.141522 1921402 cri.go:96] found id: ""
	I1222 00:36:45.141539 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.141547 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:45.141554 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:45.141632 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:45.184485 1921402 cri.go:96] found id: ""
	I1222 00:36:45.184517 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.184539 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:45.184545 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:45.184691 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:45.244764 1921402 cri.go:96] found id: ""
	I1222 00:36:45.244780 1921402 logs.go:282] 0 containers: []
	W1222 00:36:45.244789 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:45.244799 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:45.244810 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:45.323564 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:45.323584 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:45.341732 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:45.341749 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:45.426921 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:45.417511   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.418211   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.419957   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.420466   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:45.422429   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:45.426931 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:45.426941 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:45.496872 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:45.496897 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:48.027828 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:48.038486 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:48.038549 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:48.063156 1921402 cri.go:96] found id: ""
	I1222 00:36:48.063170 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.063177 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:48.063183 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:48.063242 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:48.092751 1921402 cri.go:96] found id: ""
	I1222 00:36:48.092764 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.092771 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:48.092776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:48.092833 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:48.117758 1921402 cri.go:96] found id: ""
	I1222 00:36:48.117771 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.117778 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:48.117784 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:48.117845 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:48.156671 1921402 cri.go:96] found id: ""
	I1222 00:36:48.156685 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.156692 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:48.156697 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:48.156759 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:48.181935 1921402 cri.go:96] found id: ""
	I1222 00:36:48.181948 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.181961 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:48.181966 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:48.182025 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:48.206454 1921402 cri.go:96] found id: ""
	I1222 00:36:48.206468 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.206476 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:48.206481 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:48.206537 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:48.230655 1921402 cri.go:96] found id: ""
	I1222 00:36:48.230669 1921402 logs.go:282] 0 containers: []
	W1222 00:36:48.230676 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:48.230684 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:48.230694 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:48.297753 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:48.297772 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:48.312560 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:48.312575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:48.405529 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:48.396998   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.397716   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399109   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.399746   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:48.401379   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:48.405539 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:48.405550 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:48.479216 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:48.479235 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.006751 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:51.017983 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:51.018043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:51.045081 1921402 cri.go:96] found id: ""
	I1222 00:36:51.045095 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.045102 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:51.045133 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:51.045231 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:51.072909 1921402 cri.go:96] found id: ""
	I1222 00:36:51.072923 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.072931 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:51.072936 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:51.073002 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:51.101558 1921402 cri.go:96] found id: ""
	I1222 00:36:51.101582 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.101590 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:51.101596 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:51.101667 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:51.128728 1921402 cri.go:96] found id: ""
	I1222 00:36:51.128762 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.128771 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:51.128777 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:51.128843 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:51.155107 1921402 cri.go:96] found id: ""
	I1222 00:36:51.155123 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.155129 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:51.155135 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:51.155196 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:51.183281 1921402 cri.go:96] found id: ""
	I1222 00:36:51.183295 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.183302 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:51.183308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:51.183365 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:51.209022 1921402 cri.go:96] found id: ""
	I1222 00:36:51.209050 1921402 logs.go:282] 0 containers: []
	W1222 00:36:51.209057 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:51.209078 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:51.209088 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:51.272611 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:51.264496   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.265088   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.266719   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.267234   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:51.268804   16174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:36:51.272624 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:51.272634 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:51.342544 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:51.342563 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:51.373712 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:51.373727 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:51.452392 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:51.452419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:53.968915 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:53.979167 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:53.979236 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:54.009488 1921402 cri.go:96] found id: ""
	I1222 00:36:54.009513 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.009521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:54.009527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:54.009592 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:54.036904 1921402 cri.go:96] found id: ""
	I1222 00:36:54.036919 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.036927 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:54.036932 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:54.036991 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:54.065561 1921402 cri.go:96] found id: ""
	I1222 00:36:54.065576 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.065583 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:54.065588 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:54.065651 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:54.092446 1921402 cri.go:96] found id: ""
	I1222 00:36:54.092472 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.092479 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:54.092485 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:54.092554 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:54.124282 1921402 cri.go:96] found id: ""
	I1222 00:36:54.124296 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.124303 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:54.124308 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:54.124376 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:54.149069 1921402 cri.go:96] found id: ""
	I1222 00:36:54.149082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.149089 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:54.149095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:54.149150 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:54.173952 1921402 cri.go:96] found id: ""
	I1222 00:36:54.173966 1921402 logs.go:282] 0 containers: []
	W1222 00:36:54.173973 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:54.173985 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:54.173996 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:54.243687 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:54.243707 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:54.274613 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:54.274629 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:54.347248 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:54.347266 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:54.368410 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:54.368429 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:54.442889 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:54.435376   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.435813   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437040   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.437607   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:54.439090   16303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
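	The describe-nodes failure is consistent with the empty container listings above: no kube-apiserver process or container exists, so kubectl's dial to localhost:8441 is refused outright rather than timing out. A quick confirmation from inside the node, as a sketch (both commands are illustrative, not taken from the log):

	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    curl -ksS https://localhost:8441/healthz || true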
	I1222 00:36:56.943148 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:56.953207 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:56.953265 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:56.980068 1921402 cri.go:96] found id: ""
	I1222 00:36:56.980082 1921402 logs.go:282] 0 containers: []
	W1222 00:36:56.980089 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:56.980095 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:56.980154 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:57.009964 1921402 cri.go:96] found id: ""
	I1222 00:36:57.009979 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.009987 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:57.009992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:57.010056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:36:57.040772 1921402 cri.go:96] found id: ""
	I1222 00:36:57.040788 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.040796 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:36:57.040801 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:36:57.040861 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:36:57.066436 1921402 cri.go:96] found id: ""
	I1222 00:36:57.066449 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.066456 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:36:57.066461 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:36:57.066518 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:36:57.090966 1921402 cri.go:96] found id: ""
	I1222 00:36:57.090980 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.090987 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:36:57.090992 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:36:57.091052 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:36:57.120134 1921402 cri.go:96] found id: ""
	I1222 00:36:57.120149 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.120156 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:36:57.120162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:36:57.120218 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:36:57.145014 1921402 cri.go:96] found id: ""
	I1222 00:36:57.145027 1921402 logs.go:282] 0 containers: []
	W1222 00:36:57.145034 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:36:57.145041 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:36:57.145052 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:36:57.214243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:36:57.214263 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:36:57.248369 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:36:57.248387 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:36:57.316316 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:36:57.316337 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:36:57.331296 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:36:57.331313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:36:57.423941 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:36:57.415708   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.416209   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.417860   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.418329   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:36:57.419836   16403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
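	Each listing in these cycles runs crictl ps -a --quiet --name=<component>: --name filters container names by regex, --quiet prints only container IDs, and -a includes exited containers, so an empty result means the component was never created at all, not merely that it crashed. The same check for a single component, as a standalone sketch:

	    ids=$(sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver)
	    [ -z "$ids" ] && echo "no kube-apiserver container, running or exited"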
	I1222 00:36:59.924191 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:36:59.934553 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:36:59.934611 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:36:59.963952 1921402 cri.go:96] found id: ""
	I1222 00:36:59.963967 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.963974 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:36:59.963980 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:36:59.964039 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:36:59.989241 1921402 cri.go:96] found id: ""
	I1222 00:36:59.989254 1921402 logs.go:282] 0 containers: []
	W1222 00:36:59.989261 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:36:59.989267 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:36:59.989346 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:00.041618 1921402 cri.go:96] found id: ""
	I1222 00:37:00.041645 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.041653 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:00.041659 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:00.041743 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:00.156035 1921402 cri.go:96] found id: ""
	I1222 00:37:00.156051 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.156058 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:00.156066 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:00.156135 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:00.209590 1921402 cri.go:96] found id: ""
	I1222 00:37:00.209625 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.209642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:00.209658 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:00.209760 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:00.265952 1921402 cri.go:96] found id: ""
	I1222 00:37:00.265968 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.265976 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:00.265982 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:00.266053 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:00.306640 1921402 cri.go:96] found id: ""
	I1222 00:37:00.306669 1921402 logs.go:282] 0 containers: []
	W1222 00:37:00.306678 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:00.306692 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:00.306705 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:00.391041 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:00.391062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:00.408447 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:00.408463 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:00.477856 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:00.469060   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.469638   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.471535   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.472037   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:00.473728   16497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:00.477866 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:00.477876 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:00.550139 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:00.550158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
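	The container-status step is deliberately runtime-agnostic: `which crictl || echo crictl` resolves crictl's full path when installed (falling back to the bare name), and the trailing || sudo docker ps -a retries with Docker if the CRI listing fails. An equivalent standalone form, as a sketch:

	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a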
	I1222 00:37:03.081702 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:03.091652 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:03.091709 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:03.116220 1921402 cri.go:96] found id: ""
	I1222 00:37:03.116234 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.116241 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:03.116247 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:03.116303 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:03.141299 1921402 cri.go:96] found id: ""
	I1222 00:37:03.141313 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.141340 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:03.141345 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:03.141402 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:03.166615 1921402 cri.go:96] found id: ""
	I1222 00:37:03.166630 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.166637 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:03.166642 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:03.166706 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:03.196327 1921402 cri.go:96] found id: ""
	I1222 00:37:03.196341 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.196358 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:03.196366 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:03.196432 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:03.221691 1921402 cri.go:96] found id: ""
	I1222 00:37:03.221705 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.221712 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:03.221717 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:03.221774 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:03.247455 1921402 cri.go:96] found id: ""
	I1222 00:37:03.247470 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.247477 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:03.247482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:03.247540 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:03.277376 1921402 cri.go:96] found id: ""
	I1222 00:37:03.277398 1921402 logs.go:282] 0 containers: []
	W1222 00:37:03.277406 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:03.277414 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:03.277424 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:03.345137 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:03.345158 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:03.362155 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:03.362174 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:03.438149 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:03.429891   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.430725   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.431629   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.433223   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:03.434022   16603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:03.438160 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:03.438172 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:03.508243 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:03.508262 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.037621 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:06.048285 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:06.048348 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:06.076896 1921402 cri.go:96] found id: ""
	I1222 00:37:06.076911 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.076918 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:06.076924 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:06.076985 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:06.102650 1921402 cri.go:96] found id: ""
	I1222 00:37:06.102665 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.102671 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:06.102677 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:06.102732 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:06.127851 1921402 cri.go:96] found id: ""
	I1222 00:37:06.127865 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.127872 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:06.127878 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:06.127934 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:06.154598 1921402 cri.go:96] found id: ""
	I1222 00:37:06.154612 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.154619 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:06.154625 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:06.154683 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:06.181128 1921402 cri.go:96] found id: ""
	I1222 00:37:06.181142 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.181149 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:06.181155 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:06.181215 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:06.207749 1921402 cri.go:96] found id: ""
	I1222 00:37:06.207762 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.207770 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:06.207776 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:06.207832 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:06.240273 1921402 cri.go:96] found id: ""
	I1222 00:37:06.240287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:06.240295 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:06.240302 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:06.240313 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:06.310353 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:06.310372 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:06.337909 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:06.337928 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:06.413399 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:06.413419 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:06.428074 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:06.428091 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:06.491546 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:06.483115   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.484395   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.485866   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.486184   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:06.487595   16721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
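	The timestamps show this loop re-probing roughly every three seconds. Waiting for the apiserver by hand has the same shape; a sketch only (the 8441 port comes from the errors above, while the 40-attempt budget is an arbitrary assumption):

	    for i in $(seq 1 40); do
	      curl -ksS --max-time 2 https://localhost:8441/healthz >/dev/null && { echo "apiserver is up"; break; }
	      sleep 3
	    done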
	I1222 00:37:08.991781 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:09.003482 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:09.003561 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:09.030801 1921402 cri.go:96] found id: ""
	I1222 00:37:09.030816 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.030823 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:09.030829 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:09.030884 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:09.063705 1921402 cri.go:96] found id: ""
	I1222 00:37:09.063728 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.063740 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:09.063746 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:09.063819 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:09.094354 1921402 cri.go:96] found id: ""
	I1222 00:37:09.094370 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.094378 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:09.094384 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:09.094441 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:09.120692 1921402 cri.go:96] found id: ""
	I1222 00:37:09.120706 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.120714 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:09.120719 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:09.120776 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:09.146623 1921402 cri.go:96] found id: ""
	I1222 00:37:09.146642 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.146649 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:09.146654 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:09.146711 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:09.176276 1921402 cri.go:96] found id: ""
	I1222 00:37:09.176290 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.176297 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:09.176302 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:09.176359 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:09.205092 1921402 cri.go:96] found id: ""
	I1222 00:37:09.205106 1921402 logs.go:282] 0 containers: []
	W1222 00:37:09.205113 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:09.205120 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:09.205130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:09.271826 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:09.271851 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:09.287713 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:09.287729 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:09.372451 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:09.363564   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.364517   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366322   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.366665   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:09.368181   16809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:09.372461 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:09.372474 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:09.456787 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:09.456808 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:11.988321 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:11.998626 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:11.998682 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:12.034498 1921402 cri.go:96] found id: ""
	I1222 00:37:12.034513 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.034521 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:12.034527 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:12.034588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:12.065008 1921402 cri.go:96] found id: ""
	I1222 00:37:12.065023 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.065033 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:12.065038 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:12.065098 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:12.092442 1921402 cri.go:96] found id: ""
	I1222 00:37:12.092456 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.092463 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:12.092469 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:12.092529 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:12.118395 1921402 cri.go:96] found id: ""
	I1222 00:37:12.118408 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.118416 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:12.118421 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:12.118476 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:12.143621 1921402 cri.go:96] found id: ""
	I1222 00:37:12.143634 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.143642 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:12.143647 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:12.143702 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:12.172923 1921402 cri.go:96] found id: ""
	I1222 00:37:12.172937 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.172944 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:12.172949 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:12.173004 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:12.199794 1921402 cri.go:96] found id: ""
	I1222 00:37:12.199808 1921402 logs.go:282] 0 containers: []
	W1222 00:37:12.199815 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:12.199822 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:12.199833 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:12.263467 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:12.255080   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.255523   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257075   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.257671   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:12.259356   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:12.263477 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:12.263488 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:12.334334 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:12.334358 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:12.374086 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:12.374104 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:12.447671 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:12.447690 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:14.962865 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:14.972963 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:14.973024 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:14.998158 1921402 cri.go:96] found id: ""
	I1222 00:37:14.998172 1921402 logs.go:282] 0 containers: []
	W1222 00:37:14.998179 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:14.998185 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:14.998243 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:15.029851 1921402 cri.go:96] found id: ""
	I1222 00:37:15.029867 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.029874 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:15.029882 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:15.029948 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:15.057647 1921402 cri.go:96] found id: ""
	I1222 00:37:15.057662 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.057668 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:15.057674 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:15.057742 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:15.084111 1921402 cri.go:96] found id: ""
	I1222 00:37:15.084126 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.084134 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:15.084140 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:15.084205 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:15.114963 1921402 cri.go:96] found id: ""
	I1222 00:37:15.114978 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.114984 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:15.114990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:15.115050 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:15.140108 1921402 cri.go:96] found id: ""
	I1222 00:37:15.140122 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.140129 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:15.140134 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:15.140192 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:15.169273 1921402 cri.go:96] found id: ""
	I1222 00:37:15.169287 1921402 logs.go:282] 0 containers: []
	W1222 00:37:15.169294 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:15.169302 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:15.169312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:15.235722 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:15.235741 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:15.250646 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:15.250662 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:15.315252 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:15.306828   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.307685   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309165   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.309676   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:15.311367   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:15.315262 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:15.315272 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:15.390764 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:15.390784 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:17.922554 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:17.932679 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:17.932739 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:17.957865 1921402 cri.go:96] found id: ""
	I1222 00:37:17.957879 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.957886 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:17.957891 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:17.957952 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:17.987323 1921402 cri.go:96] found id: ""
	I1222 00:37:17.987337 1921402 logs.go:282] 0 containers: []
	W1222 00:37:17.987344 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:17.987349 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:17.987410 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:18.017446 1921402 cri.go:96] found id: ""
	I1222 00:37:18.017461 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.017469 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:18.017474 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:18.017538 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:18.043105 1921402 cri.go:96] found id: ""
	I1222 00:37:18.043119 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.043126 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:18.043131 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:18.043188 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:18.069214 1921402 cri.go:96] found id: ""
	I1222 00:37:18.069229 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.069236 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:18.069241 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:18.069304 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:18.094585 1921402 cri.go:96] found id: ""
	I1222 00:37:18.094598 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.094605 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:18.094610 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:18.094670 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:18.124271 1921402 cri.go:96] found id: ""
	I1222 00:37:18.124292 1921402 logs.go:282] 0 containers: []
	W1222 00:37:18.124300 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:18.124307 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:18.124318 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:18.191628 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:18.191645 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:18.206373 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:18.206389 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:18.271908 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:18.263844   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.264495   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.265977   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.266531   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:18.267971   17125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:18.271918 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:18.271931 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:18.347020 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:18.347041 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:20.886577 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:20.896630 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:20.896687 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:20.924710 1921402 cri.go:96] found id: ""
	I1222 00:37:20.924724 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.924731 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:20.924737 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:20.924797 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:20.949929 1921402 cri.go:96] found id: ""
	I1222 00:37:20.949944 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.949951 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:20.949956 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:20.950015 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:20.975241 1921402 cri.go:96] found id: ""
	I1222 00:37:20.975254 1921402 logs.go:282] 0 containers: []
	W1222 00:37:20.975262 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:20.975273 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:20.975332 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:21.007134 1921402 cri.go:96] found id: ""
	I1222 00:37:21.007149 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.007157 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:21.007162 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:21.007226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:21.036823 1921402 cri.go:96] found id: ""
	I1222 00:37:21.036837 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.036845 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:21.036850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:21.036922 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:21.066186 1921402 cri.go:96] found id: ""
	I1222 00:37:21.066201 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.066208 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:21.066214 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:21.066273 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:21.092594 1921402 cri.go:96] found id: ""
	I1222 00:37:21.092609 1921402 logs.go:282] 0 containers: []
	W1222 00:37:21.092616 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:21.092624 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:21.092658 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:21.155743 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:21.147229   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.148232   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.149047   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150532   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:21.150962   17224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:21.155753 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:21.155763 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:21.230114 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:21.230133 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:21.258046 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:21.258062 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:21.330592 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:21.330619 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:23.847823 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:23.857930 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:37:23.857993 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:37:23.883272 1921402 cri.go:96] found id: ""
	I1222 00:37:23.883286 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.883294 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:37:23.883300 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:37:23.883355 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:37:23.907502 1921402 cri.go:96] found id: ""
	I1222 00:37:23.907515 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.907523 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:37:23.907528 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:37:23.907588 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:37:23.932684 1921402 cri.go:96] found id: ""
	I1222 00:37:23.932698 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.932705 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:37:23.932711 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:37:23.932769 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:37:23.957636 1921402 cri.go:96] found id: ""
	I1222 00:37:23.957650 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.957657 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:37:23.957662 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:37:23.957719 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:37:23.982476 1921402 cri.go:96] found id: ""
	I1222 00:37:23.982490 1921402 logs.go:282] 0 containers: []
	W1222 00:37:23.982497 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:37:23.982502 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:37:23.982560 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:37:24.014243 1921402 cri.go:96] found id: ""
	I1222 00:37:24.014258 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.014265 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:37:24.014278 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:37:24.014341 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:37:24.041726 1921402 cri.go:96] found id: ""
	I1222 00:37:24.041740 1921402 logs.go:282] 0 containers: []
	W1222 00:37:24.041748 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:37:24.041756 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:37:24.041766 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:37:24.110293 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:37:24.110312 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:37:24.126712 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:37:24.126736 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:37:24.190552 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:37:24.181228   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.181964   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.183629   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.184353   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:37:24.186631   17335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:37:24.190564 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:37:24.190575 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:37:24.262140 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:37:24.262165 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:37:26.797351 1921402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:37:26.808692 1921402 kubeadm.go:602] duration metric: took 4m2.99406476s to restartPrimaryControlPlane
	W1222 00:37:26.808754 1921402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:37:26.808829 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:37:27.218865 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:37:27.231651 1921402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:37:27.239390 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:37:27.239447 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:37:27.246990 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:37:27.247000 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:37:27.247052 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:37:27.254602 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:37:27.254657 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:37:27.261917 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:37:27.269756 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:37:27.269819 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:37:27.277175 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.285102 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:37:27.285164 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:37:27.292582 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:37:27.300147 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:37:27.300201 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:37:27.307155 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:37:27.346116 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:37:27.346165 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:37:27.414020 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:37:27.414089 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:37:27.414123 1921402 kubeadm.go:319] OS: Linux
	I1222 00:37:27.414168 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:37:27.414215 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:37:27.414261 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:37:27.414307 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:37:27.414354 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:37:27.414404 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:37:27.414448 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:37:27.414494 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:37:27.414538 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:37:27.481436 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:37:27.481554 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:37:27.481674 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:37:27.489751 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:37:27.493685 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:37:27.493793 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:37:27.493881 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:37:27.493967 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:37:27.494032 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:37:27.494108 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:37:27.494164 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:37:27.494230 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:37:27.494294 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:37:27.494372 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:37:27.494448 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:37:27.494487 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:37:27.494545 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:37:27.957235 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:37:28.242322 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:37:28.521612 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:37:28.976211 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:37:29.260315 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:37:29.260957 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:37:29.263905 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:37:29.266956 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:37:29.267051 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:37:29.267139 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:37:29.268296 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:37:29.283008 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:37:29.283332 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:37:29.290939 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:37:29.291397 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:37:29.291445 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:37:29.423121 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:37:29.423418 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:41:29.424266 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00111221s
	I1222 00:41:29.424291 1921402 kubeadm.go:319] 
	I1222 00:41:29.424347 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:41:29.424379 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:41:29.424483 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:41:29.424487 1921402 kubeadm.go:319] 
	I1222 00:41:29.424591 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:41:29.424622 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:41:29.424652 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:41:29.424655 1921402 kubeadm.go:319] 
	I1222 00:41:29.429048 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:41:29.429508 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:41:29.429617 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:41:29.429851 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:41:29.429855 1921402 kubeadm.go:319] 
	I1222 00:41:29.429925 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:41:29.430059 1921402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00111221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
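	[Editor's sketch] The failure above is kubeadm's wait-control-plane phase timing out while polling the kubelet's local health endpoint. A minimal sketch of the diagnosis the message itself suggests (the commands are quoted from the kubeadm output above; running them requires a shell on the node, e.g. via 'minikube ssh', which is an assumption here, not something this log shows):
	
		# Probe the endpoint kubeadm polls until its 4m0s deadline expires
		curl -sSL http://127.0.0.1:10248/healthz
		# Inspect the kubelet service state and its recent log entries
		systemctl status kubelet
		journalctl -xeu kubelet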
	
	I1222 00:41:29.430143 1921402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 00:41:29.837125 1921402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:41:29.849742 1921402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:41:29.849801 1921402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:41:29.857510 1921402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:41:29.857520 1921402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:41:29.857569 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:41:29.865043 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:41:29.865100 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:41:29.872214 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:41:29.879983 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:41:29.880037 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:41:29.887891 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.895656 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:41:29.895711 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:41:29.902944 1921402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:41:29.910473 1921402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:41:29.910531 1921402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:41:29.917929 1921402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:41:29.958087 1921402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:41:29.958157 1921402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:41:30.050415 1921402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:41:30.050500 1921402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:41:30.050542 1921402 kubeadm.go:319] OS: Linux
	I1222 00:41:30.050625 1921402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:41:30.050679 1921402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:41:30.050743 1921402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:41:30.050790 1921402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:41:30.050861 1921402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:41:30.050918 1921402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:41:30.050963 1921402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:41:30.051010 1921402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:41:30.051055 1921402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:41:30.118501 1921402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:41:30.118617 1921402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:41:30.118712 1921402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:41:30.125936 1921402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:41:30.127797 1921402 out.go:252]   - Generating certificates and keys ...
	I1222 00:41:30.127893 1921402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:41:30.129701 1921402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:41:30.129794 1921402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:41:30.129856 1921402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:41:30.129967 1921402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:41:30.130025 1921402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:41:30.130096 1921402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:41:30.130183 1921402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:41:30.130272 1921402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:41:30.130344 1921402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:41:30.130391 1921402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:41:30.130455 1921402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:41:30.344018 1921402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:41:30.899259 1921402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:41:31.157696 1921402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:41:31.268170 1921402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:41:31.664677 1921402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:41:31.665563 1921402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:41:31.668250 1921402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:41:31.669734 1921402 out.go:252]   - Booting up control plane ...
	I1222 00:41:31.669834 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:41:31.670177 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:41:31.671652 1921402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:41:31.687500 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:41:31.687601 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:41:31.694901 1921402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:41:31.695472 1921402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:41:31.695535 1921402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:41:31.830593 1921402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:41:31.830722 1921402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:45:31.830877 1921402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001320631s
	I1222 00:45:31.831182 1921402 kubeadm.go:319] 
	I1222 00:45:31.831295 1921402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:45:31.831352 1921402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:45:31.831682 1921402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:45:31.831690 1921402 kubeadm.go:319] 
	I1222 00:45:31.831881 1921402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:45:31.831936 1921402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:45:31.831990 1921402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:45:31.831994 1921402 kubeadm.go:319] 
	I1222 00:45:31.836486 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:45:31.837376 1921402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:45:31.837502 1921402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:45:31.837782 1921402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:45:31.837816 1921402 kubeadm.go:319] 
	I1222 00:45:31.837891 1921402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
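	[Editor's sketch] The cgroups v1 warning repeated above names the kubelet configuration option 'FailCgroupV1'. As a hedged illustration only (the file name and the idea of supplying it as an extra kubelet configuration fragment are assumptions, not taken from this log), explicitly opting in to cgroup v1 would look like:
	
		# Hypothetical KubeletConfiguration fragment; 'failCgroupV1' is the
		# field the warning above names (exact availability per upstream docs)
		cat <<'EOF' > kubelet-cgroupv1.yaml
		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		failCgroupV1: false
		EOF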
	I1222 00:45:31.837949 1921402 kubeadm.go:403] duration metric: took 12m8.05957574s to StartCluster
	I1222 00:45:31.837990 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:45:31.838056 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:45:31.866824 1921402 cri.go:96] found id: ""
	I1222 00:45:31.866837 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.866844 1921402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:45:31.866850 1921402 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 00:45:31.866906 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:45:31.892772 1921402 cri.go:96] found id: ""
	I1222 00:45:31.892797 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.892805 1921402 logs.go:284] No container was found matching "etcd"
	I1222 00:45:31.892812 1921402 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 00:45:31.892881 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:45:31.919961 1921402 cri.go:96] found id: ""
	I1222 00:45:31.919975 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.919982 1921402 logs.go:284] No container was found matching "coredns"
	I1222 00:45:31.919987 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:45:31.920043 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:45:31.945683 1921402 cri.go:96] found id: ""
	I1222 00:45:31.945697 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.945704 1921402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:45:31.945710 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:45:31.945770 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:45:31.971052 1921402 cri.go:96] found id: ""
	I1222 00:45:31.971066 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.971074 1921402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:45:31.971080 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:45:31.971139 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:45:31.997143 1921402 cri.go:96] found id: ""
	I1222 00:45:31.997158 1921402 logs.go:282] 0 containers: []
	W1222 00:45:31.997165 1921402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:45:31.997170 1921402 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 00:45:31.997226 1921402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:45:32.024698 1921402 cri.go:96] found id: ""
	I1222 00:45:32.024713 1921402 logs.go:282] 0 containers: []
	W1222 00:45:32.024721 1921402 logs.go:284] No container was found matching "kindnet"
	I1222 00:45:32.024729 1921402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:45:32.024739 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:45:32.092110 1921402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:45:32.092130 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:45:32.109516 1921402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:45:32.109532 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:45:32.182932 1921402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:32.174874   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.175432   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.176969   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.177520   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:32.179123   21133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:45:32.182942 1921402 logs.go:123] Gathering logs for CRI-O ...
	I1222 00:45:32.182952 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 00:45:32.255366 1921402 logs.go:123] Gathering logs for container status ...
	I1222 00:45:32.255386 1921402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:45:32.288449 1921402 out.go:285] * 
	W1222 00:45:32.288511 1921402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the block above ...]
	
	W1222 00:45:32.288527 1921402 out.go:285] * 
	W1222 00:45:32.290657 1921402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:45:32.295098 1921402 out.go:203] 
	W1222 00:45:32.297043 1921402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the block above ...]
	
	W1222 00:45:32.297089 1921402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:45:32.297111 1921402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:45:32.299905 1921402 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 00:33:22 functional-837733 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.485049198Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=f8b31a28-1e5e-4924-b750-a0efe7effecd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.485769225Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=96015691-150e-4012-8e1b-65198bb7cc23 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.486269641Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=155cce42-29f5-4d02-81fe-382aac99f970 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.486755321Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=3cd990c1-1fbc-47fd-a924-3e1fddfaba39 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.487198967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=fb5de215-5f60-4f7a-a012-72665a4d2b87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.48764071Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=88930c08-aae3-4b3f-862d-918e893427ff name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:37:27 functional-837733 crio[9971]: time="2025-12-22T00:37:27.488127998Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=0b18c52c-a177-41d0-84f8-a99061f82c45 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.121686935Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=a65ca03c-da33-4282-bf75-37afebaaec0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122407537Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=3242b675-c84d-4560-8bb8-f6d771881c0b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.122951619Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=23242d7e-419c-42f0-8bef-ee98feafd2e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123417804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5531d018-c2c9-45be-a1ce-0187a1f831ac name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.123836761Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=10e6d9d2-6073-425b-bed1-1e668d53ee0c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124244175Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c2632b93-d501-48e1-9830-ac88cf5004de name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:41:30 functional-837733 crio[9971]: time="2025-12-22T00:41:30.124649086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8dce6ea5-b89a-4098-8ce9-aea233a27c3e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964031496Z" level=info msg="Checking image status: kicbase/echo-server:functional-837733" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964252675Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964296825Z" level=info msg="Image kicbase/echo-server:functional-837733 not found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:40 functional-837733 crio[9971]: time="2025-12-22T00:45:40.964375806Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-837733 found" id=e28a1bba-faa0-4f85-be96-0016cbb29746 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005620606Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-837733" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005780347Z" level=info msg="Image docker.io/kicbase/echo-server:functional-837733 not found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.005818057Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-837733 found" id=fb2bb655-4c3f-4994-836e-03c53dc427b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040267534Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-837733" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040399911Z" level=info msg="Image localhost/kicbase/echo-server:functional-837733 not found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 00:45:41 functional-837733 crio[9971]: time="2025-12-22T00:45:41.040434683Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-837733 found" id=48f50940-7741-4e03-85ef-d7952e22b22f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:45:42.464835   21904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:42.465813   21904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:42.467354   21904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:42.467655   21904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:45:42.469081   21904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 22:05] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 00:09] overlayfs: idmapped layers are currently not supported
	[Dec22 00:15] overlayfs: idmapped layers are currently not supported
	[Dec22 00:16] overlayfs: idmapped layers are currently not supported
	[Dec22 00:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:45:42 up 1 day,  7:28,  0 user,  load average: 0.88, 0.39, 0.53
	Linux functional-837733 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:45:39 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:40 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 22 00:45:40 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:40 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:40 functional-837733 kubelet[21702]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:40 functional-837733 kubelet[21702]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:40 functional-837733 kubelet[21702]: E1222 00:45:40.431231   21702 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:40 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:40 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:41 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 22 00:45:41 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:41 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:41 functional-837733 kubelet[21778]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:41 functional-837733 kubelet[21778]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:41 functional-837733 kubelet[21778]: E1222 00:45:41.177311   21778 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:41 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:41 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:45:41 functional-837733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 333.
	Dec 22 00:45:41 functional-837733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:41 functional-837733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:45:41 functional-837733 kubelet[21820]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:41 functional-837733 kubelet[21820]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 00:45:41 functional-837733 kubelet[21820]: E1222 00:45:41.955450   21820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:45:41 functional-837733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:45:41 functional-837733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733 -n functional-837733: exit status 2 (442.872974ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-837733" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (3.03s)
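The kubelet journal above shows the actual crash loop: kubelet v1.35.0-rc.1 exits during configuration validation because this host still runs cgroup v1. A remediation sketch assembled only from hints already in this log (the profile name, the --extra-config suggestion, and the FailCgroupV1 option named in the SystemVerification warning); this is untested guidance, not a verified fix, and the YAML field casing is assumed to follow the usual KubeletConfiguration lowerCamelCase:

	# Inspect the crash loop directly, as the kubeadm output suggests
	journalctl -xeu kubelet
	
	# Option 1, per the minikube suggestion: retry with the systemd cgroup driver
	minikube start -p functional-837733 --extra-config=kubelet.cgroup-driver=systemd
	
	# Option 2, per the SystemVerification warning: explicitly allow cgroup v1
	# by setting FailCgroupV1 to false in the config the kubelet already loads
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet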

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-837733 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-837733 create deployment hello-node --image kicbase/echo-server: exit status 1 (82.600811ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-837733 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.08s)
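Every ServiceCmd subtest below fails the same way as this deployment: nothing is listening on the apiserver endpoint (192.168.49.2:8441 from the host, localhost:8441 inside the node), so each command is refused before any service logic runs. A quick triage sketch using the same commands the harness runs elsewhere in this report:

	# Prints "Stopped" for this profile, matching the exit-103 messages below
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-837733
	
	# Fails with "connection refused" against https://192.168.49.2:8441
	kubectl --context functional-837733 cluster-info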

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 service list: exit status 103 (313.784378ms)

-- stdout --
	* The control-plane node functional-837733 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-837733"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-837733 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-837733 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-837733\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 service list -o json: exit status 103 (332.761276ms)

-- stdout --
	* The control-plane node functional-837733 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-837733"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-837733 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 service --namespace=default --https --url hello-node: exit status 103 (310.207845ms)

-- stdout --
	* The control-plane node functional-837733 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-837733"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-837733 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 service hello-node --url --format={{.IP}}: exit status 103 (361.228566ms)

-- stdout --
	* The control-plane node functional-837733 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-837733"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-837733 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-837733 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-837733\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 service hello-node --url: exit status 103 (450.063347ms)

-- stdout --
	* The control-plane node functional-837733 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-837733"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-837733 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-837733 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-837733"
functional_test.go:1579: failed to parse "* The control-plane node functional-837733 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-837733\"": parse "* The control-plane node functional-837733 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-837733\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1222 00:45:47.535642 1936372 out.go:360] Setting OutFile to fd 1 ...
I1222 00:45:47.535853 1936372 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:45:47.535887 1936372 out.go:374] Setting ErrFile to fd 2...
I1222 00:45:47.535906 1936372 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:45:47.536187 1936372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:45:47.536486 1936372 mustload.go:66] Loading cluster: functional-837733
I1222 00:45:47.536950 1936372 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:45:47.537475 1936372 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:45:47.554765 1936372 host.go:66] Checking if "functional-837733" exists ...
I1222 00:45:47.555067 1936372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:45:47.676656 1936372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:45:47.666525332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:45:47.676774 1936372 api_server.go:166] Checking apiserver status ...
I1222 00:45:47.676821 1936372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1222 00:45:47.676858 1936372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:45:47.720190 1936372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
W1222 00:45:47.834940 1936372 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1222 00:45:47.841414 1936372 out.go:179] * The control-plane node functional-837733 apiserver is not running: (state=Stopped)
I1222 00:45:47.844442 1936372 out.go:179]   To start a cluster, run: "minikube start -p functional-837733"

stdout: * The control-plane node functional-837733 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-837733"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1936371: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-837733 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-837733 apply -f testdata/testsvc.yaml: exit status 1 (122.100966ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-837733 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (119.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.109.136.175": Temporary Error: Get "http://10.109.136.175": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-837733 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-837733 get svc nginx-svc: exit status 1 (64.463574ms)

** stderr ** 
	E1222 00:47:47.484277 1937783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.485711 1937783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.487149 1937783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.488614 1937783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1222 00:47:47.490011 1937783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-837733 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (119.40s)
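AccessDirect automates a tunnel-plus-curl flow: with minikube tunnel holding routes open, the test resolves the ClusterIP of nginx-svc and fetches it over plain HTTP. A manual sketch of the same flow (service and profile names taken from the log; it presupposes a healthy apiserver, which this run does not have):

	# Shell 1: keep the tunnel alive for the duration of the check
	out/minikube-linux-arm64 -p functional-837733 tunnel
	
	# Shell 2: resolve the ClusterIP and hit it directly
	IP=$(kubectl --context functional-837733 get svc nginx-svc -o jsonpath='{.spec.clusterIP}')
	curl -sS --max-time 10 "http://${IP}" | grep 'Welcome to nginx!'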

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766364474200773490" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766364474200773490" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766364474200773490" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001/test-1766364474200773490
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.276191ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1222 00:47:54.520322 1881219 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 22 00:47 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 22 00:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 22 00:47 test-1766364474200773490
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh cat /mount-9p/test-1766364474200773490
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-837733 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) Non-zero exit: kubectl --context functional-837733 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (55.927576ms)

** stderr ** 
	E1222 00:47:55.711935 1939245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:151: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-837733 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:81: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:82: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:82: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (253.880937ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=39323)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 22 00:47 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 22 00:47 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 22 00:47 test-1766364474200773490
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:84: debugging command "out/minikube-linux-arm64 -p functional-837733 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
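Note that the 9p mount itself worked: findmnt shows the mount at /mount-9p with the expected options, and all three host-written files are visible in the guest. Only the pod half of the test is missing, since /mount-9p/pod-dates would have been written by the busybox pod that could not be created against the stopped apiserver. The host-side checks can be replayed by hand with the same commands the test uses:

	out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-837733 ssh -- ls -la /mount-9p
	# Cleanup, mirroring the test teardown
	out/minikube-linux-arm64 -p functional-837733 ssh "sudo umount -f /mount-9p"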
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:95: (dbg) [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:39323
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...


functional_test_mount_test.go:95: (dbg) [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001:/mount-9p --alsologtostderr -v=1] stderr:
I1222 00:47:54.262711 1938910 out.go:360] Setting OutFile to fd 1 ...
I1222 00:47:54.263090 1938910 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:47:54.263104 1938910 out.go:374] Setting ErrFile to fd 2...
I1222 00:47:54.263110 1938910 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:47:54.263362 1938910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:47:54.263603 1938910 mustload.go:66] Loading cluster: functional-837733
I1222 00:47:54.263939 1938910 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:47:54.264429 1938910 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:47:54.286274 1938910 host.go:66] Checking if "functional-837733" exists ...
I1222 00:47:54.286563 1938910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:47:54.375689 1938910 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:47:54.366761295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:47:54.375856 1938910 cli_runner.go:164] Run: docker network inspect functional-837733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 00:47:54.400156 1938910 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001 into VM as /mount-9p ...
I1222 00:47:54.403138 1938910 out.go:179]   - Mount type:   9p
I1222 00:47:54.406012 1938910 out.go:179]   - User ID:      docker
I1222 00:47:54.408740 1938910 out.go:179]   - Group ID:     docker
I1222 00:47:54.412060 1938910 out.go:179]   - Version:      9p2000.L
I1222 00:47:54.415050 1938910 out.go:179]   - Message Size: 262144
I1222 00:47:54.417940 1938910 out.go:179]   - Options:      map[]
I1222 00:47:54.420715 1938910 out.go:179]   - Bind Address: 192.168.49.1:39323
I1222 00:47:54.423434 1938910 out.go:179] * Userspace file server: 
I1222 00:47:54.426355 1938910 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1222 00:47:54.426437 1938910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:47:54.447006 1938910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
I1222 00:47:54.544272 1938910 mount.go:180] unmount for /mount-9p ran successfully
I1222 00:47:54.544299 1938910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1222 00:47:54.552986 1938910 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39323,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1222 00:47:54.563386 1938910 main.go:127] stdlog: ufs.go:141 connected
I1222 00:47:54.563556 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tversion tag 65535 msize 262144 version '9P2000.L'
I1222 00:47:54.563605 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rversion tag 65535 msize 262144 version '9P2000'
I1222 00:47:54.563847 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1222 00:47:54.563915 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rattach tag 0 aqid (3b7b77 4386eb53 'd')
I1222 00:47:54.564623 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 0
I1222 00:47:54.564688 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b7b77 4386eb53 'd') m d775 at 0 mt 1766364474 l 4096 t 0 d 0 ext )
I1222 00:47:54.577355 1938910 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/.mount-process: {Name:mk0b4c16dc844beb5fdeefeaed79987327d7cc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:47:54.577552 1938910 mount.go:105] mount successful: ""
I1222 00:47:54.580981 1938910 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4290489477/001 to /mount-9p
I1222 00:47:54.583863 1938910 out.go:203] 
I1222 00:47:54.586655 1938910 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
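
Reading the 9P trace that follows: each ">>>" line is a T-message (request) from the kernel client in the guest at 192.168.49.2, and the matching "<<<" line is the userspace server's R-message (reply); the tag and fid numbers tie the pairs together. Note the version negotiation at the top of the trace: the client asked for '9P2000.L' and the server answered plain '9P2000', so the mount runs on the downgraded dialect. A hedged sketch for reproducing the guest-side mount by hand, with the profile name, bind address and port copied from the log above (they would differ on a live run):

	# run on the host; the single-quoted $(...) expands inside the guest shell
	minikube -p functional-837733 ssh -- /bin/bash -c 'sudo mkdir -p /mount-9p && sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39323,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p'
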
I1222 00:47:55.376171 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 0
I1222 00:47:55.376246 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b7b77 4386eb53 'd') m d775 at 0 mt 1766364474 l 4096 t 0 d 0 ext )
I1222 00:47:55.376621 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 1 
I1222 00:47:55.376665 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 
I1222 00:47:55.376768 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Topen tag 0 fid 1 mode 0
I1222 00:47:55.376829 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Ropen tag 0 qid (3b7b77 4386eb53 'd') iounit 0
I1222 00:47:55.376927 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 0
I1222 00:47:55.376995 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b7b77 4386eb53 'd') m d775 at 0 mt 1766364474 l 4096 t 0 d 0 ext )
I1222 00:47:55.377146 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 0 count 262120
I1222 00:47:55.377272 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 258
I1222 00:47:55.377403 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 258 count 261862
I1222 00:47:55.377433 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.377543 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:47:55.377576 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.377688 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1222 00:47:55.377725 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b78 4386eb53 '') 
I1222 00:47:55.377819 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.377857 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b7b78 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.377952 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.377999 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b7b78 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.378100 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.378130 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.378237 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 2 0:'test-1766364474200773490' 
I1222 00:47:55.378273 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b7a 4386eb53 '') 
I1222 00:47:55.378365 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.378406 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('test-1766364474200773490' 'jenkins' 'jenkins' '' q (3b7b7a 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.378525 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.378559 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('test-1766364474200773490' 'jenkins' 'jenkins' '' q (3b7b7a 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.378654 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.378685 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.378800 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1222 00:47:55.378837 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b79 4386eb53 '') 
I1222 00:47:55.378933 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.378977 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b7b79 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.379070 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.379112 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b7b79 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.379208 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.379240 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.379334 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:47:55.379372 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.379495 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 1
I1222 00:47:55.379535 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.645683 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 1 0:'test-1766364474200773490' 
I1222 00:47:55.645752 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b7a 4386eb53 '') 
I1222 00:47:55.645928 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 1
I1222 00:47:55.645991 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('test-1766364474200773490' 'jenkins' 'jenkins' '' q (3b7b7a 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.646133 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 1 newfid 2 
I1222 00:47:55.646169 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 
I1222 00:47:55.646307 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Topen tag 0 fid 2 mode 0
I1222 00:47:55.646355 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Ropen tag 0 qid (3b7b7a 4386eb53 '') iounit 0
I1222 00:47:55.646479 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 1
I1222 00:47:55.646512 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('test-1766364474200773490' 'jenkins' 'jenkins' '' q (3b7b7a 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.646668 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 2 offset 0 count 262120
I1222 00:47:55.646712 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 24
I1222 00:47:55.646838 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 2 offset 24 count 262120
I1222 00:47:55.646867 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.647014 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 2 offset 24 count 262120
I1222 00:47:55.647062 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.647329 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.647365 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.647520 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 1
I1222 00:47:55.647546 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.960023 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 0
I1222 00:47:55.960097 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b7b77 4386eb53 'd') m d775 at 0 mt 1766364474 l 4096 t 0 d 0 ext )
I1222 00:47:55.960466 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 1 
I1222 00:47:55.960504 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 
I1222 00:47:55.960646 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Topen tag 0 fid 1 mode 0
I1222 00:47:55.960701 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Ropen tag 0 qid (3b7b77 4386eb53 'd') iounit 0
I1222 00:47:55.960852 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 0
I1222 00:47:55.960910 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b7b77 4386eb53 'd') m d775 at 0 mt 1766364474 l 4096 t 0 d 0 ext )
I1222 00:47:55.961046 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 0 count 262120
I1222 00:47:55.961156 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 258
I1222 00:47:55.961295 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 258 count 261862
I1222 00:47:55.961340 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.961476 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:47:55.961506 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.961650 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1222 00:47:55.961682 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b78 4386eb53 '') 
I1222 00:47:55.961803 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.961838 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b7b78 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.961972 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.962004 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b7b78 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.962123 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.962146 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.962286 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 2 0:'test-1766364474200773490' 
I1222 00:47:55.962318 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b7a 4386eb53 '') 
I1222 00:47:55.962434 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.962468 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('test-1766364474200773490' 'jenkins' 'jenkins' '' q (3b7b7a 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.962604 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.962640 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('test-1766364474200773490' 'jenkins' 'jenkins' '' q (3b7b7a 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.962753 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.962775 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.962917 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1222 00:47:55.962954 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rwalk tag 0 (3b7b79 4386eb53 '') 
I1222 00:47:55.963083 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.963118 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b7b79 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.963262 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tstat tag 0 fid 2
I1222 00:47:55.963293 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b7b79 4386eb53 '') m 644 at 0 mt 1766364474 l 24 t 0 d 0 ext )
I1222 00:47:55.963406 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 2
I1222 00:47:55.963429 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.963556 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:47:55.963586 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rread tag 0 count 0
I1222 00:47:55.963718 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 1
I1222 00:47:55.963755 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:55.964972 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1222 00:47:55.965040 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rerror tag 0 ename 'file not found' ecode 0
I1222 00:47:56.243880 1938910 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:47012 Tclunk tag 0 fid 0
I1222 00:47:56.243931 1938910 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:47012 Rclunk tag 0
I1222 00:47:56.244922 1938910 main.go:127] stdlog: ufs.go:147 disconnected
I1222 00:47:56.272734 1938910 out.go:179] * Unmounting /mount-9p ...
I1222 00:47:56.275837 1938910 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1222 00:47:56.282919 1938910 mount.go:180] unmount for /mount-9p ran successfully
I1222 00:47:56.283051 1938910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/.mount-process: {Name:mk0b4c16dc844beb5fdeefeaed79987327d7cc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:47:56.286302 1938910 out.go:203] 
W1222 00:47:56.289163 1938910 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1222 00:47:56.291990 1938910 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.17s)
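
Note: the mount and unmount themselves ran cleanly; the subtest fails because the harness received a terminated signal (the MK_INTERRUPTED exit above) while the mount process was required to stay alive, and the Rerror 'file not found' on 'pod-dates' in the trace is simply the last lookup before teardown, i.e. the test pod never got far enough to write that file. A hedged sketch for re-running only this subtest against a locally built binary (the harness normally also supplies driver and start-args flags, omitted here):

	go test ./test/integration -v -timeout 30m -run 'TestFunctionalNewestKubernetes/.*/parallel/MountCmd/any-port'
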

TestJSONOutput/pause/Command (1.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-069941 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-069941 --output=json --user=testUser: exit status 80 (1.723068381s)

-- stdout --
	{"specversion":"1.0","id":"9ae6c8ff-d50d-4a49-89f7-3bb5a6c904e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-069941 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4b1b0cc3-63ee-4a4c-a671-d9c5ed89874c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-22T01:01:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"e9c7f61a-a41a-4be5-b300-842e0a94e5f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-069941 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.72s)
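
This failure and the unpause failure below share one root cause: the runc state directory /run/runc is missing inside the node, so the 'sudo runc list -f json' that minikube issues to enumerate containers exits non-zero. Two hedged checks against the node (profile name copied from the log):

	minikube -p json-output-069941 ssh -- sudo ls -ld /run/runc
	minikube -p json-output-069941 ssh -- sudo runc list -f json

Because the stream above is one CloudEvents object per line, the error event can also be isolated with jq (assuming jq is available):

	out/minikube-linux-arm64 pause -p json-output-069941 --output=json --user=testUser | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.name'
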

TestJSONOutput/unpause/Command (1.95s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-069941 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-069941 --output=json --user=testUser: exit status 80 (1.951914892s)

-- stdout --
	{"specversion":"1.0","id":"c3f7cc4f-fc37-41a3-ac81-de24b1f4c956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-069941 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"8f5f5735-e893-459e-b384-4c1dde749f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-22T01:01:11Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"2c7ca08d-0a17-4ef4-8670-b08b132f2394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-069941 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.95s)

TestKubernetesUpgrade (790.68s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.937710646s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-951043
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-951043: (1.452551067s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-951043 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-951043 status --format={{.Host}}: exit status 7 (204.561899ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
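
Exit status 7 from "status" encodes a fully stopped profile (host, kubelet and apiserver bits all set), consistent with the "Stopped" printed above, which is why the test proceeds. Condensed, the scenario this test drives is the user-visible upgrade sequence, copied from the Run lines above:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-951043
	out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio
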
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m19.531726302s)

-- stdout --
	* [kubernetes-upgrade-951043] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-951043" primary control-plane node in "kubernetes-upgrade-951043" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1222 01:16:13.104034 2054169 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:16:13.106484 2054169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:16:13.106503 2054169 out.go:374] Setting ErrFile to fd 2...
	I1222 01:16:13.106509 2054169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:16:13.106783 2054169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:16:13.107193 2054169 out.go:368] Setting JSON to false
	I1222 01:16:13.108509 2054169 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115122,"bootTime":1766251051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:16:13.108581 2054169 start.go:143] virtualization:  
	I1222 01:16:13.112041 2054169 out.go:179] * [kubernetes-upgrade-951043] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:16:13.115915 2054169 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:16:13.116142 2054169 notify.go:221] Checking for updates...
	I1222 01:16:13.122260 2054169 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:16:13.125189 2054169 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:16:13.128023 2054169 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:16:13.131043 2054169 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:16:13.133746 2054169 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:16:13.137058 2054169 config.go:182] Loaded profile config "kubernetes-upgrade-951043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:16:13.137702 2054169 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:16:13.193926 2054169 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:16:13.194055 2054169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:16:13.319749 2054169 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:16:13.30437149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:16:13.319853 2054169 docker.go:319] overlay module found
	I1222 01:16:13.322905 2054169 out.go:179] * Using the docker driver based on existing profile
	I1222 01:16:13.326459 2054169 start.go:309] selected driver: docker
	I1222 01:16:13.326483 2054169 start.go:928] validating driver "docker" against &{Name:kubernetes-upgrade-951043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-951043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:16:13.326567 2054169 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:16:13.327240 2054169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:16:13.449461 2054169 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:16:13.438095332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:16:13.449774 2054169 cni.go:84] Creating CNI manager for ""
	I1222 01:16:13.449836 2054169 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:16:13.449885 2054169 start.go:353] cluster config:
	{Name:kubernetes-upgrade-951043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-951043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:16:13.455431 2054169 out.go:179] * Starting "kubernetes-upgrade-951043" primary control-plane node in "kubernetes-upgrade-951043" cluster
	I1222 01:16:13.458320 2054169 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:16:13.461276 2054169 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:16:13.464220 2054169 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:16:13.464273 2054169 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:16:13.464283 2054169 cache.go:65] Caching tarball of preloaded images
	I1222 01:16:13.464321 2054169 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:16:13.464367 2054169 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:16:13.464377 2054169 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:16:13.464482 2054169 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/config.json ...
	I1222 01:16:13.492223 2054169 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:16:13.492248 2054169 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:16:13.492263 2054169 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:16:13.492293 2054169 start.go:360] acquireMachinesLock for kubernetes-upgrade-951043: {Name:mkecdeb126994039e5cbc95474ab625bb32b9e3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:16:13.492352 2054169 start.go:364] duration metric: took 36.832µs to acquireMachinesLock for "kubernetes-upgrade-951043"
	I1222 01:16:13.492376 2054169 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:16:13.492385 2054169 fix.go:54] fixHost starting: 
	I1222 01:16:13.492652 2054169 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-951043 --format={{.State.Status}}
	I1222 01:16:13.525657 2054169 fix.go:112] recreateIfNeeded on kubernetes-upgrade-951043: state=Stopped err=<nil>
	W1222 01:16:13.525690 2054169 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:16:13.528968 2054169 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-951043" ...
	I1222 01:16:13.529054 2054169 cli_runner.go:164] Run: docker start kubernetes-upgrade-951043
	I1222 01:16:13.881454 2054169 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-951043 --format={{.State.Status}}
	I1222 01:16:13.924123 2054169 kic.go:430] container "kubernetes-upgrade-951043" state is running.
	I1222 01:16:13.924503 2054169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-951043
	I1222 01:16:13.947895 2054169 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/config.json ...
	I1222 01:16:13.948126 2054169 machine.go:94] provisionDockerMachine start ...
	I1222 01:16:13.948185 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:13.971040 2054169 main.go:144] libmachine: Using SSH client type: native
	I1222 01:16:13.971379 2054169 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38681 <nil> <nil>}
	I1222 01:16:13.971389 2054169 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:16:13.972214 2054169 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:16:17.109284 2054169 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-951043
	
	I1222 01:16:17.109309 2054169 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-951043"
	I1222 01:16:17.109401 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:17.132648 2054169 main.go:144] libmachine: Using SSH client type: native
	I1222 01:16:17.133116 2054169 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38681 <nil> <nil>}
	I1222 01:16:17.133133 2054169 main.go:144] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-951043 && echo "kubernetes-upgrade-951043" | sudo tee /etc/hostname
	I1222 01:16:17.273870 2054169 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-951043
	
	I1222 01:16:17.273957 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:17.290931 2054169 main.go:144] libmachine: Using SSH client type: native
	I1222 01:16:17.291243 2054169 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38681 <nil> <nil>}
	I1222 01:16:17.291265 2054169 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-951043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-951043/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-951043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:16:17.425783 2054169 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:16:17.425813 2054169 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:16:17.425857 2054169 ubuntu.go:190] setting up certificates
	I1222 01:16:17.425874 2054169 provision.go:84] configureAuth start
	I1222 01:16:17.425948 2054169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-951043
	I1222 01:16:17.444636 2054169 provision.go:143] copyHostCerts
	I1222 01:16:17.444707 2054169 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:16:17.444730 2054169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:16:17.444811 2054169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:16:17.444921 2054169 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:16:17.444932 2054169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:16:17.444960 2054169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:16:17.445021 2054169 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:16:17.445029 2054169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:16:17.445053 2054169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:16:17.445112 2054169 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-951043 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-951043 localhost minikube]
	I1222 01:16:17.838451 2054169 provision.go:177] copyRemoteCerts
	I1222 01:16:17.838562 2054169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:16:17.838637 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:17.855456 2054169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38681 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kubernetes-upgrade-951043/id_rsa Username:docker}
	I1222 01:16:17.948758 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:16:17.965522 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1222 01:16:17.983320 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:16:18.001842 2054169 provision.go:87] duration metric: took 575.933843ms to configureAuth
	I1222 01:16:18.001875 2054169 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:16:18.002107 2054169 config.go:182] Loaded profile config "kubernetes-upgrade-951043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:16:18.002251 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:18.021885 2054169 main.go:144] libmachine: Using SSH client type: native
	I1222 01:16:18.022220 2054169 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38681 <nil> <nil>}
	I1222 01:16:18.022244 2054169 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:16:18.383552 2054169 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:16:18.383572 2054169 machine.go:97] duration metric: took 4.435436961s to provisionDockerMachine
	I1222 01:16:18.383584 2054169 start.go:293] postStartSetup for "kubernetes-upgrade-951043" (driver="docker")
	I1222 01:16:18.383595 2054169 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:16:18.383660 2054169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:16:18.383712 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:18.400550 2054169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38681 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kubernetes-upgrade-951043/id_rsa Username:docker}
	I1222 01:16:18.497239 2054169 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:16:18.500542 2054169 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:16:18.500573 2054169 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:16:18.500584 2054169 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:16:18.500638 2054169 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:16:18.500734 2054169 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:16:18.500849 2054169 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:16:18.508436 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:16:18.525606 2054169 start.go:296] duration metric: took 142.006699ms for postStartSetup
	I1222 01:16:18.525705 2054169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:16:18.525742 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:18.542182 2054169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38681 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kubernetes-upgrade-951043/id_rsa Username:docker}
	I1222 01:16:18.634184 2054169 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:16:18.638440 2054169 fix.go:56] duration metric: took 5.146048305s for fixHost
	I1222 01:16:18.638462 2054169 start.go:83] releasing machines lock for "kubernetes-upgrade-951043", held for 5.146097887s
	I1222 01:16:18.638530 2054169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-951043
	I1222 01:16:18.655777 2054169 ssh_runner.go:195] Run: cat /version.json
	I1222 01:16:18.655796 2054169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:16:18.655830 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:18.655853 2054169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-951043
	I1222 01:16:18.674164 2054169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38681 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kubernetes-upgrade-951043/id_rsa Username:docker}
	I1222 01:16:18.674764 2054169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38681 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kubernetes-upgrade-951043/id_rsa Username:docker}
	I1222 01:16:18.768906 2054169 ssh_runner.go:195] Run: systemctl --version
	I1222 01:16:18.865973 2054169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:16:18.901833 2054169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:16:18.906193 2054169 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:16:18.906269 2054169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:16:18.914055 2054169 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:16:18.914081 2054169 start.go:496] detecting cgroup driver to use...
	I1222 01:16:18.914112 2054169 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:16:18.914161 2054169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:16:18.928600 2054169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:16:18.941525 2054169 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:16:18.941589 2054169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:16:18.957211 2054169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:16:18.970361 2054169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:16:19.077061 2054169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:16:19.193255 2054169 docker.go:234] disabling docker service ...
	I1222 01:16:19.193419 2054169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:16:19.208118 2054169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:16:19.220776 2054169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:16:19.336790 2054169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:16:19.492244 2054169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:16:19.507116 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:16:19.522508 2054169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:16:19.522577 2054169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:16:19.532203 2054169 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:16:19.532286 2054169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:16:19.546022 2054169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:16:19.557817 2054169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:16:19.567143 2054169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:16:19.575518 2054169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:16:19.584590 2054169 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:16:19.596756 2054169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
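Taken together, the sed edits above point 02-crio.conf at the pause image, select cgroupfs, scope conmon to the pod cgroup, and open unprivileged low ports. A sketch of the inferred end state of /etc/crio/crio.conf.d/02-crio.conf (section headers assumed from CRI-O's standard TOML layout; the actual drop-in may arrange them differently):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]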
	I1222 01:16:19.605711 2054169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:16:19.613476 2054169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:16:19.620961 2054169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:16:19.770641 2054169 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:16:19.958427 2054169 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:16:19.958503 2054169 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:16:19.962937 2054169 start.go:564] Will wait 60s for crictl version
	I1222 01:16:19.962996 2054169 ssh_runner.go:195] Run: which crictl
	I1222 01:16:19.968382 2054169 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:16:20.015057 2054169 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:16:20.015154 2054169 ssh_runner.go:195] Run: crio --version
	I1222 01:16:20.057650 2054169 ssh_runner.go:195] Run: crio --version
	I1222 01:16:20.108921 2054169 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:16:20.111765 2054169 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-951043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:16:20.137968 2054169 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:16:20.142301 2054169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
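The idiom above rewrites /etc/hosts without needing a root shell for the redirection: grep -v strips any stale host.minikube.internal entry, the block's output lands in a temp file, and only the final cp runs under sudo (a plain ">" redirect into /etc/hosts would execute with the caller's privileges, not root's). Spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.76.1	host.minikube.internal"
	} > /tmp/h.$$            # $$ is the shell's PID, so the temp name is unique per run
	sudo cp /tmp/h.$$ /etc/hosts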
	I1222 01:16:20.151919 2054169 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-951043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-951043 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:16:20.152066 2054169 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:16:20.152123 2054169 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:16:20.189169 2054169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:16:20.189248 2054169 ssh_runner.go:195] Run: which lz4
	I1222 01:16:20.193189 2054169 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1222 01:16:20.197420 2054169 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1222 01:16:20.197455 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306154261 bytes)
	I1222 01:16:22.145506 2054169 crio.go:462] duration metric: took 1.952355684s to copy over tarball
	I1222 01:16:22.145615 2054169 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1222 01:16:24.221221 2054169 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.075566192s)
	I1222 01:16:24.221254 2054169 crio.go:469] duration metric: took 2.075716621s to extract the tarball
	I1222 01:16:24.221262 2054169 ssh_runner.go:146] rm: /preloaded.tar.lz4
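The preload flow above amounts to copying the cached tarball to the node and unpacking the image layers under /var (paths as logged; lz4 must be present on the node for tar's -I filter):

	# after the tarball has been scp'd to /preloaded.tar.lz4 on the node:
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4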
	I1222 01:16:24.282142 2054169 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:16:24.324528 2054169 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:16:24.324555 2054169 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:16:24.324565 2054169 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:16:24.324686 2054169 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-951043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-951043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:16:24.324781 2054169 ssh_runner.go:195] Run: crio config
	I1222 01:16:24.422326 2054169 cni.go:84] Creating CNI manager for ""
	I1222 01:16:24.422361 2054169 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:16:24.422396 2054169 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:16:24.422447 2054169 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-951043 NodeName:kubernetes-upgrade-951043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:16:24.422622 2054169 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-951043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:16:24.422723 2054169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:16:24.430771 2054169 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:16:24.430891 2054169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:16:24.438620 2054169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1222 01:16:24.452175 2054169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:16:24.465568 2054169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
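The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy (see below). As a sketch, recent kubeadm releases can sanity-check such a file offline; this call is illustrative and not part of this run:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new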
	I1222 01:16:24.481801 2054169 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:16:24.489419 2054169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:16:24.511770 2054169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:16:24.666825 2054169 ssh_runner.go:195] Run: sudo systemctl start kubelet
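With the unit file and drop-in in place, daemon-reload and start are enough for systemd to pick up the override. One illustrative way to confirm the rendered unit (not part of this run) is systemctl cat, which prints kubelet.service followed by the 10-kubeadm.conf drop-in:

	systemctl cat kubelet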
	I1222 01:16:24.684579 2054169 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043 for IP: 192.168.76.2
	I1222 01:16:24.684641 2054169 certs.go:195] generating shared ca certs ...
	I1222 01:16:24.684683 2054169 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:16:24.684857 2054169 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:16:24.684933 2054169 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:16:24.684969 2054169 certs.go:257] generating profile certs ...
	I1222 01:16:24.685099 2054169 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/client.key
	I1222 01:16:24.685185 2054169 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/apiserver.key.033db96f
	I1222 01:16:24.685258 2054169 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/proxy-client.key
	I1222 01:16:24.685425 2054169 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:16:24.685481 2054169 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:16:24.685512 2054169 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:16:24.685568 2054169 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:16:24.685624 2054169 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:16:24.685679 2054169 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:16:24.685764 2054169 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:16:24.686374 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:16:24.711209 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:16:24.735295 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:16:24.759407 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:16:24.782558 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1222 01:16:24.808697 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:16:24.827017 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:16:24.846984 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:16:24.867116 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:16:24.887136 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:16:24.905651 2054169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:16:24.929051 2054169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:16:24.942776 2054169 ssh_runner.go:195] Run: openssl version
	I1222 01:16:24.949268 2054169 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:16:24.956840 2054169 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:16:24.964281 2054169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:16:24.968884 2054169 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:16:24.968953 2054169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:16:25.014640 2054169 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:16:25.022985 2054169 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:16:25.030788 2054169 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:16:25.038440 2054169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:16:25.042485 2054169 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:16:25.042580 2054169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:16:25.083696 2054169 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:16:25.091941 2054169 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:16:25.100066 2054169 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:16:25.108407 2054169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:16:25.113273 2054169 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:16:25.113484 2054169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:16:25.160760 2054169 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:16:25.168136 2054169 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:16:25.172103 2054169 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:16:25.218138 2054169 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:16:25.260457 2054169 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:16:25.301986 2054169 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:16:25.344941 2054169 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:16:25.389014 2054169 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
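openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 24 hours, which is how the six control-plane certs above are screened without parsing dates. An equivalent loop over the same files (illustrative):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "$c expires within 24h"
	done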
	I1222 01:16:25.432409 2054169 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-951043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-951043 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:16:25.432522 2054169 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:16:25.432587 2054169 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:16:25.462158 2054169 cri.go:96] found id: ""
	I1222 01:16:25.462228 2054169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:16:25.470214 2054169 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:16:25.470237 2054169 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:16:25.470298 2054169 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:16:25.477399 2054169 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:16:25.477778 2054169 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-951043" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:16:25.477880 2054169 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-951043" cluster setting kubeconfig missing "kubernetes-upgrade-951043" context setting]
	I1222 01:16:25.478173 2054169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:16:25.479958 2054169 kapi.go:59] client config for kubernetes-upgrade-951043: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kubernetes-upgrade-951043/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:16:25.480455 2054169 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 01:16:25.480472 2054169 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 01:16:25.480478 2054169 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 01:16:25.480482 2054169 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 01:16:25.480486 2054169 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 01:16:25.480753 2054169 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:16:25.491117 2054169 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 01:15:44.560856141 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 01:16:24.475636625 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-951043"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
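The drift is the kubeadm API move from v1beta3 to v1beta4: extraArgs maps become name/value lists, etcd's proxy-refresh-interval override is dropped, and kubernetesVersion jumps from v1.28.0 to v1.35.0-rc.1. minikube regenerates the file rather than migrating it, but for reference kubeadm can perform the schema conversion itself (illustrative invocation):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml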
	I1222 01:16:25.491138 2054169 kubeadm.go:1161] stopping kube-system containers ...
	I1222 01:16:25.491148 2054169 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 01:16:25.491229 2054169 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:16:25.519285 2054169 cri.go:96] found id: ""
	I1222 01:16:25.519372 2054169 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 01:16:25.536998 2054169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:16:25.544788 2054169 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 22 01:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 22 01:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 22 01:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 22 01:15 /etc/kubernetes/scheduler.conf
	
	I1222 01:16:25.544875 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:16:25.552686 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:16:25.560141 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:16:25.568395 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:16:25.568463 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:16:25.575513 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:16:25.583790 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:16:25.583859 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:16:25.591293 2054169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:16:25.600139 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:16:25.652833 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:16:26.954701 2054169 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.301794339s)
	I1222 01:16:26.954770 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:16:27.218308 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:16:27.329117 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
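The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full kubeadm init. After "control-plane all" and "etcd local", the static Pod manifests should be on disk; an illustrative check, assuming kubeadm's standard layout:

	sudo ls /etc/kubernetes/manifests
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml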
	I1222 01:16:27.384163 2054169 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:16:27.384243 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same probe repeated every ~500ms with no match, 01:16:27.884 through 01:17:26.385: 118 identical "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines elided ...]
	I1222 01:17:26.884423 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
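The probe repeated above checks for the apiserver process by full command line: -f matches the pattern (a regex) against the whole cmdline, -x requires the match to cover it exactly, and -n keeps only the newest PID. The manual equivalent:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"

After 60 seconds with no match (01:16:27 to 01:17:27 here), minikube falls through to gathering diagnostics below.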
	I1222 01:17:27.385136 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:27.385223 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:27.416618 2054169 cri.go:96] found id: ""
	I1222 01:17:27.416649 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.416658 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:27.416665 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:27.416724 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:27.443732 2054169 cri.go:96] found id: ""
	I1222 01:17:27.443754 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.443762 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:27.443769 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:27.443829 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:27.469953 2054169 cri.go:96] found id: ""
	I1222 01:17:27.469985 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.469994 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:27.470001 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:27.470060 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:27.496114 2054169 cri.go:96] found id: ""
	I1222 01:17:27.496140 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.496148 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:27.496154 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:27.496215 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:27.522648 2054169 cri.go:96] found id: ""
	I1222 01:17:27.522675 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.522684 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:27.522690 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:27.522754 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:27.548499 2054169 cri.go:96] found id: ""
	I1222 01:17:27.548526 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.548535 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:27.548541 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:27.548608 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:27.578237 2054169 cri.go:96] found id: ""
	I1222 01:17:27.578260 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.578269 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:27.578275 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:27.578333 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:27.612169 2054169 cri.go:96] found id: ""
	I1222 01:17:27.612196 2054169 logs.go:282] 0 containers: []
	W1222 01:17:27.612204 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:27.612214 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:27.612225 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:27.691967 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:27.692003 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:27.708799 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:27.708828 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:27.771314 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:27.771336 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:27.771348 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:27.802211 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:27.802247 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
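Because the apiserver never appeared, each retry ends by collecting the same diagnostics bundle. Reproducing it by hand on the node amounts to:

	sudo journalctl -u kubelet -n 400                                        # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings
	sudo journalctl -u crio -n 400                                           # CRI-O logs
	sudo crictl ps -a                                                        # container status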
	I1222 01:17:30.335719 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:30.347674 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:30.347745 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:30.379812 2054169 cri.go:96] found id: ""
	I1222 01:17:30.379834 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.379843 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:30.379849 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:30.379911 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:30.424464 2054169 cri.go:96] found id: ""
	I1222 01:17:30.424485 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.424493 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:30.424499 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:30.424562 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:30.461296 2054169 cri.go:96] found id: ""
	I1222 01:17:30.461318 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.461358 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:30.461365 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:30.461426 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:30.490599 2054169 cri.go:96] found id: ""
	I1222 01:17:30.490620 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.490629 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:30.490635 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:30.490694 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:30.526753 2054169 cri.go:96] found id: ""
	I1222 01:17:30.526782 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.526790 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:30.526797 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:30.526859 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:30.554885 2054169 cri.go:96] found id: ""
	I1222 01:17:30.554909 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.554918 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:30.554925 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:30.554990 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:30.580938 2054169 cri.go:96] found id: ""
	I1222 01:17:30.580964 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.580973 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:30.580980 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:30.581044 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:30.605403 2054169 cri.go:96] found id: ""
	I1222 01:17:30.605427 2054169 logs.go:282] 0 containers: []
	W1222 01:17:30.605435 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:30.605444 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:30.605455 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:30.669114 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:30.669136 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:30.669148 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:30.699684 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:30.699718 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:30.729684 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:30.729712 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:30.795139 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:30.795173 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:33.311752 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:33.321684 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:33.321753 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:33.354015 2054169 cri.go:96] found id: ""
	I1222 01:17:33.354044 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.354053 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:33.354059 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:33.354118 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:33.383487 2054169 cri.go:96] found id: ""
	I1222 01:17:33.383521 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.383530 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:33.383537 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:33.383597 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:33.418561 2054169 cri.go:96] found id: ""
	I1222 01:17:33.418585 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.418593 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:33.418599 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:33.418664 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:33.446424 2054169 cri.go:96] found id: ""
	I1222 01:17:33.446448 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.446458 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:33.446465 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:33.446529 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:33.471488 2054169 cri.go:96] found id: ""
	I1222 01:17:33.471511 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.471520 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:33.471526 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:33.471643 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:33.505346 2054169 cri.go:96] found id: ""
	I1222 01:17:33.505370 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.505379 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:33.505385 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:33.505446 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:33.532705 2054169 cri.go:96] found id: ""
	I1222 01:17:33.532731 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.532740 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:33.532746 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:33.532810 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:33.558011 2054169 cri.go:96] found id: ""
	I1222 01:17:33.558036 2054169 logs.go:282] 0 containers: []
	W1222 01:17:33.558045 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:33.558054 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:33.558066 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:33.586453 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:33.586524 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:33.656338 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:33.656377 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:33.673657 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:33.673688 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:33.742046 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:33.742111 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:33.742136 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:36.273851 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:36.284256 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:36.284346 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:36.311056 2054169 cri.go:96] found id: ""
	I1222 01:17:36.311084 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.311093 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:36.311100 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:36.311162 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:36.345964 2054169 cri.go:96] found id: ""
	I1222 01:17:36.345987 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.345996 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:36.346002 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:36.346061 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:36.382022 2054169 cri.go:96] found id: ""
	I1222 01:17:36.382045 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.382053 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:36.382059 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:36.382121 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:36.415770 2054169 cri.go:96] found id: ""
	I1222 01:17:36.415792 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.415800 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:36.415806 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:36.415866 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:36.442146 2054169 cri.go:96] found id: ""
	I1222 01:17:36.442169 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.442177 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:36.442183 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:36.442247 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:36.469027 2054169 cri.go:96] found id: ""
	I1222 01:17:36.469048 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.469074 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:36.469081 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:36.469141 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:36.496275 2054169 cri.go:96] found id: ""
	I1222 01:17:36.496297 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.496305 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:36.496311 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:36.496371 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:36.522518 2054169 cri.go:96] found id: ""
	I1222 01:17:36.522540 2054169 logs.go:282] 0 containers: []
	W1222 01:17:36.522548 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:36.522557 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:36.522569 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:36.592367 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:36.592389 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:36.592402 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:36.623705 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:36.623742 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:36.656708 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:36.656783 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:36.723438 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:36.723520 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:39.240156 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:39.251833 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:39.251897 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:39.280260 2054169 cri.go:96] found id: ""
	I1222 01:17:39.280281 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.280290 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:39.280296 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:39.280353 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:39.312108 2054169 cri.go:96] found id: ""
	I1222 01:17:39.312136 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.312144 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:39.312151 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:39.312209 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:39.359873 2054169 cri.go:96] found id: ""
	I1222 01:17:39.359895 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.359904 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:39.359910 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:39.359970 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:39.418507 2054169 cri.go:96] found id: ""
	I1222 01:17:39.418529 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.418538 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:39.418544 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:39.418604 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:39.483020 2054169 cri.go:96] found id: ""
	I1222 01:17:39.483042 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.483050 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:39.483057 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:39.483123 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:39.521559 2054169 cri.go:96] found id: ""
	I1222 01:17:39.521581 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.521590 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:39.521596 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:39.521654 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:39.571333 2054169 cri.go:96] found id: ""
	I1222 01:17:39.571354 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.571362 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:39.571368 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:39.571432 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:39.611140 2054169 cri.go:96] found id: ""
	I1222 01:17:39.611170 2054169 logs.go:282] 0 containers: []
	W1222 01:17:39.611179 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:39.611188 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:39.611200 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:39.629806 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:39.629892 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:39.713452 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:39.713523 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:39.713551 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:39.750891 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:39.750923 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:39.796420 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:39.796502 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:42.393451 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:42.405169 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:42.405243 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:42.431940 2054169 cri.go:96] found id: ""
	I1222 01:17:42.431967 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.431976 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:42.431982 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:42.432044 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:42.457863 2054169 cri.go:96] found id: ""
	I1222 01:17:42.457888 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.457897 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:42.457903 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:42.457986 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:42.483734 2054169 cri.go:96] found id: ""
	I1222 01:17:42.483757 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.483765 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:42.483772 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:42.483834 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:42.516644 2054169 cri.go:96] found id: ""
	I1222 01:17:42.516669 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.516677 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:42.516684 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:42.516746 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:42.543787 2054169 cri.go:96] found id: ""
	I1222 01:17:42.543812 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.543833 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:42.543840 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:42.543901 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:42.569210 2054169 cri.go:96] found id: ""
	I1222 01:17:42.569236 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.569244 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:42.569250 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:42.569354 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:42.594526 2054169 cri.go:96] found id: ""
	I1222 01:17:42.594553 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.594562 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:42.594569 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:42.594630 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:42.620505 2054169 cri.go:96] found id: ""
	I1222 01:17:42.620529 2054169 logs.go:282] 0 containers: []
	W1222 01:17:42.620537 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:42.620546 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:42.620560 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:42.687760 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:42.687797 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:42.704598 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:42.704640 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:42.773262 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:42.773291 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:42.773304 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:42.804404 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:42.804439 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:45.333700 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:45.348960 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:45.349039 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:45.377250 2054169 cri.go:96] found id: ""
	I1222 01:17:45.377283 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.377292 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:45.377299 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:45.377398 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:45.405596 2054169 cri.go:96] found id: ""
	I1222 01:17:45.405630 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.405639 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:45.405647 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:45.405714 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:45.434351 2054169 cri.go:96] found id: ""
	I1222 01:17:45.434374 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.434383 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:45.434389 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:45.434448 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:45.463603 2054169 cri.go:96] found id: ""
	I1222 01:17:45.463632 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.463641 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:45.463648 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:45.463715 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:45.489819 2054169 cri.go:96] found id: ""
	I1222 01:17:45.489845 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.489853 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:45.489860 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:45.489931 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:45.514276 2054169 cri.go:96] found id: ""
	I1222 01:17:45.514301 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.514309 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:45.514316 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:45.514376 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:45.539742 2054169 cri.go:96] found id: ""
	I1222 01:17:45.539765 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.539774 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:45.539780 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:45.539844 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:45.565687 2054169 cri.go:96] found id: ""
	I1222 01:17:45.565713 2054169 logs.go:282] 0 containers: []
	W1222 01:17:45.565721 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:45.565729 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:45.565741 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:45.596162 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:45.596195 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:45.627595 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:45.627665 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:45.695149 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:45.695184 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:45.711060 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:45.711089 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:45.774949 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:48.275868 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:48.285614 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:48.285684 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:48.314710 2054169 cri.go:96] found id: ""
	I1222 01:17:48.314732 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.314741 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:48.314748 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:48.314806 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:48.356532 2054169 cri.go:96] found id: ""
	I1222 01:17:48.356560 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.356569 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:48.356575 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:48.356642 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:48.395673 2054169 cri.go:96] found id: ""
	I1222 01:17:48.395701 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.395710 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:48.395716 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:48.395776 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:48.424821 2054169 cri.go:96] found id: ""
	I1222 01:17:48.424849 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.424858 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:48.424864 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:48.424930 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:48.450348 2054169 cri.go:96] found id: ""
	I1222 01:17:48.450374 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.450383 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:48.450389 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:48.450456 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:48.475751 2054169 cri.go:96] found id: ""
	I1222 01:17:48.475778 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.475788 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:48.475795 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:48.475856 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:48.504283 2054169 cri.go:96] found id: ""
	I1222 01:17:48.504310 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.504319 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:48.504325 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:48.504386 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:48.529206 2054169 cri.go:96] found id: ""
	I1222 01:17:48.529242 2054169 logs.go:282] 0 containers: []
	W1222 01:17:48.529251 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:48.529259 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:48.529271 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:48.559340 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:48.559370 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:48.588412 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:48.588439 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:48.655582 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:48.655618 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:48.671439 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:48.671471 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:48.734518 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:51.235464 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:51.245683 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:51.245758 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:51.276938 2054169 cri.go:96] found id: ""
	I1222 01:17:51.276967 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.276979 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:51.276986 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:51.277060 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:51.305699 2054169 cri.go:96] found id: ""
	I1222 01:17:51.305778 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.305794 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:51.305801 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:51.305864 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:51.340213 2054169 cri.go:96] found id: ""
	I1222 01:17:51.340247 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.340256 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:51.340262 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:51.340332 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:51.370305 2054169 cri.go:96] found id: ""
	I1222 01:17:51.370326 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.370335 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:51.370341 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:51.370401 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:51.404828 2054169 cri.go:96] found id: ""
	I1222 01:17:51.404850 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.404859 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:51.404866 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:51.404925 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:51.431307 2054169 cri.go:96] found id: ""
	I1222 01:17:51.431374 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.431396 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:51.431407 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:51.431490 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:51.460689 2054169 cri.go:96] found id: ""
	I1222 01:17:51.460715 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.460724 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:51.460731 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:51.460842 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:51.489695 2054169 cri.go:96] found id: ""
	I1222 01:17:51.489763 2054169 logs.go:282] 0 containers: []
	W1222 01:17:51.489786 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:51.489803 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:51.489816 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:51.520940 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:51.520976 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:51.554413 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:51.554438 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:51.620050 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:51.620085 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:51.636187 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:51.636214 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:51.701304 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:54.202009 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:54.211933 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:54.212002 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:54.238460 2054169 cri.go:96] found id: ""
	I1222 01:17:54.238482 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.238491 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:54.238497 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:54.238558 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:54.264612 2054169 cri.go:96] found id: ""
	I1222 01:17:54.264640 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.264649 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:54.264655 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:54.264716 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:54.290046 2054169 cri.go:96] found id: ""
	I1222 01:17:54.290070 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.290085 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:54.290092 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:54.290154 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:54.316253 2054169 cri.go:96] found id: ""
	I1222 01:17:54.316277 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.316294 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:54.316302 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:54.316365 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:54.354164 2054169 cri.go:96] found id: ""
	I1222 01:17:54.354192 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.354201 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:54.354208 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:54.354275 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:54.383562 2054169 cri.go:96] found id: ""
	I1222 01:17:54.383584 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.383592 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:54.383598 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:54.383704 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:54.417945 2054169 cri.go:96] found id: ""
	I1222 01:17:54.417966 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.417975 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:54.417981 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:54.418045 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:54.443547 2054169 cri.go:96] found id: ""
	I1222 01:17:54.443575 2054169 logs.go:282] 0 containers: []
	W1222 01:17:54.443583 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:54.443592 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:54.443603 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:54.474764 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:54.474797 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:54.504136 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:54.504165 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:17:54.575401 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:54.575437 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:54.591254 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:54.591283 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:54.658838 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:57.160363 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:17:57.173719 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:17:57.173800 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:17:57.218774 2054169 cri.go:96] found id: ""
	I1222 01:17:57.218808 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.218817 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:17:57.218827 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:17:57.218908 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:17:57.253033 2054169 cri.go:96] found id: ""
	I1222 01:17:57.253067 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.253076 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:17:57.253083 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:17:57.253142 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:17:57.307763 2054169 cri.go:96] found id: ""
	I1222 01:17:57.307807 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.307817 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:17:57.307823 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:17:57.307882 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:17:57.371119 2054169 cri.go:96] found id: ""
	I1222 01:17:57.371142 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.371151 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:17:57.371157 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:17:57.371330 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:17:57.429519 2054169 cri.go:96] found id: ""
	I1222 01:17:57.429541 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.429550 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:17:57.429556 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:17:57.429615 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:17:57.463470 2054169 cri.go:96] found id: ""
	I1222 01:17:57.463493 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.463501 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:17:57.463508 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:17:57.463566 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:17:57.488844 2054169 cri.go:96] found id: ""
	I1222 01:17:57.488870 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.488879 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:17:57.488886 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:17:57.488950 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:17:57.516763 2054169 cri.go:96] found id: ""
	I1222 01:17:57.516791 2054169 logs.go:282] 0 containers: []
	W1222 01:17:57.516800 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:17:57.516810 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:17:57.516822 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:17:57.533186 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:17:57.533259 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:17:57.593425 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:17:57.593453 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:17:57.593467 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:17:57.624019 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:17:57.624051 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:17:57.652336 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:17:57.652409 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:00.221675 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:00.269817 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:00.269914 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:00.331310 2054169 cri.go:96] found id: ""
	I1222 01:18:00.331335 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.331345 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:00.331353 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:00.331427 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:00.384451 2054169 cri.go:96] found id: ""
	I1222 01:18:00.384543 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.384569 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:00.384592 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:00.384735 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:00.420699 2054169 cri.go:96] found id: ""
	I1222 01:18:00.420724 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.420733 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:00.420741 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:00.420821 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:00.450979 2054169 cri.go:96] found id: ""
	I1222 01:18:00.451012 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.451022 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:00.451029 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:00.451090 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:00.478934 2054169 cri.go:96] found id: ""
	I1222 01:18:00.478959 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.478967 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:00.478974 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:00.479453 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:00.512790 2054169 cri.go:96] found id: ""
	I1222 01:18:00.512819 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.512828 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:00.512835 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:00.512903 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:00.545804 2054169 cri.go:96] found id: ""
	I1222 01:18:00.545828 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.545836 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:00.545843 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:00.545904 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:00.571205 2054169 cri.go:96] found id: ""
	I1222 01:18:00.571285 2054169 logs.go:282] 0 containers: []
	W1222 01:18:00.571301 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:00.571311 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:00.571326 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:00.640048 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:00.640085 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:00.656103 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:00.656139 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:00.721508 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:00.721533 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:00.721547 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:00.752615 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:00.752649 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:03.315033 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:03.326553 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:03.326632 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:03.370186 2054169 cri.go:96] found id: ""
	I1222 01:18:03.370211 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.370220 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:03.370227 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:03.370299 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:03.405413 2054169 cri.go:96] found id: ""
	I1222 01:18:03.405441 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.405450 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:03.405456 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:03.405520 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:03.436717 2054169 cri.go:96] found id: ""
	I1222 01:18:03.436752 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.436761 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:03.436767 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:03.436827 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:03.463540 2054169 cri.go:96] found id: ""
	I1222 01:18:03.463567 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.463577 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:03.463584 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:03.463649 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:03.493610 2054169 cri.go:96] found id: ""
	I1222 01:18:03.493634 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.493642 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:03.493648 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:03.493710 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:03.519642 2054169 cri.go:96] found id: ""
	I1222 01:18:03.519722 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.519745 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:03.519774 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:03.519867 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:03.549676 2054169 cri.go:96] found id: ""
	I1222 01:18:03.549700 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.549708 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:03.549715 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:03.549780 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:03.575714 2054169 cri.go:96] found id: ""
	I1222 01:18:03.575739 2054169 logs.go:282] 0 containers: []
	W1222 01:18:03.575747 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:03.575756 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:03.575768 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:03.642600 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:03.642634 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:03.659103 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:03.659132 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:03.731196 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:03.731222 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:03.731236 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:03.762635 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:03.762670 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:06.295545 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:06.305549 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:06.305624 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:06.331973 2054169 cri.go:96] found id: ""
	I1222 01:18:06.332000 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.332010 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:06.332016 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:06.332083 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:06.363368 2054169 cri.go:96] found id: ""
	I1222 01:18:06.363390 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.363398 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:06.363404 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:06.363467 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:06.399797 2054169 cri.go:96] found id: ""
	I1222 01:18:06.399818 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.399827 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:06.399833 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:06.399893 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:06.426636 2054169 cri.go:96] found id: ""
	I1222 01:18:06.426659 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.426667 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:06.426673 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:06.426737 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:06.456747 2054169 cri.go:96] found id: ""
	I1222 01:18:06.456770 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.456778 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:06.456785 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:06.456846 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:06.484554 2054169 cri.go:96] found id: ""
	I1222 01:18:06.484577 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.484586 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:06.484593 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:06.484661 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:06.512205 2054169 cri.go:96] found id: ""
	I1222 01:18:06.512229 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.512238 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:06.512248 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:06.512310 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:06.537483 2054169 cri.go:96] found id: ""
	I1222 01:18:06.537505 2054169 logs.go:282] 0 containers: []
	W1222 01:18:06.537514 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:06.537524 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:06.537542 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:06.604775 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:06.604813 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:06.620407 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:06.620450 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:06.682491 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:06.682512 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:06.682529 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:06.713241 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:06.713274 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:09.244811 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:09.255142 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:09.255215 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:09.280549 2054169 cri.go:96] found id: ""
	I1222 01:18:09.280573 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.280582 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:09.280591 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:09.280666 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:09.306758 2054169 cri.go:96] found id: ""
	I1222 01:18:09.306782 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.306792 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:09.306798 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:09.306900 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:09.339074 2054169 cri.go:96] found id: ""
	I1222 01:18:09.339097 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.339105 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:09.339112 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:09.339179 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:09.376652 2054169 cri.go:96] found id: ""
	I1222 01:18:09.376735 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.376758 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:09.376776 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:09.376889 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:09.406656 2054169 cri.go:96] found id: ""
	I1222 01:18:09.406678 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.406686 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:09.406693 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:09.406754 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:09.433637 2054169 cri.go:96] found id: ""
	I1222 01:18:09.433669 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.433677 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:09.433691 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:09.433758 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:09.459495 2054169 cri.go:96] found id: ""
	I1222 01:18:09.459519 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.459529 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:09.459535 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:09.459598 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:09.485116 2054169 cri.go:96] found id: ""
	I1222 01:18:09.485143 2054169 logs.go:282] 0 containers: []
	W1222 01:18:09.485152 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:09.485160 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:09.485172 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:09.515088 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:09.515121 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:09.545645 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:09.545680 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:09.613723 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:09.613764 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:09.630087 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:09.630117 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:09.699676 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
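
	Every describe-nodes attempt in this section fails the same way: kubectl cannot reach localhost:8443, which is consistent with crictl finding no kube-apiserver container in any pass. A hedged way to confirm from the node that nothing is bound to that port (the ss invocation is an assumption for manual triage, not something the harness runs):

	    # Assumption: run on the minikube node; not part of the test harness.
	    # Confirm nothing is listening on the apiserver port kubectl dials.
	    sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"

	    # Cross-check with the same crictl query the harness uses: empty
	    # output means the apiserver container was never created or started.
	    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
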
	I1222 01:18:12.199942 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:12.210109 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:12.210188 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:12.236890 2054169 cri.go:96] found id: ""
	I1222 01:18:12.236914 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.236923 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:12.236930 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:12.236988 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:12.263828 2054169 cri.go:96] found id: ""
	I1222 01:18:12.263850 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.263858 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:12.263868 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:12.263928 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:12.291012 2054169 cri.go:96] found id: ""
	I1222 01:18:12.291035 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.291044 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:12.291051 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:12.291110 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:12.320467 2054169 cri.go:96] found id: ""
	I1222 01:18:12.320489 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.320498 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:12.320504 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:12.320564 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:12.357859 2054169 cri.go:96] found id: ""
	I1222 01:18:12.357881 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.357890 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:12.357896 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:12.357959 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:12.395702 2054169 cri.go:96] found id: ""
	I1222 01:18:12.395723 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.395733 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:12.395739 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:12.395806 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:12.425851 2054169 cri.go:96] found id: ""
	I1222 01:18:12.425875 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.425883 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:12.425890 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:12.425949 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:12.450987 2054169 cri.go:96] found id: ""
	I1222 01:18:12.451010 2054169 logs.go:282] 0 containers: []
	W1222 01:18:12.451018 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:12.451027 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:12.451073 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:12.467598 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:12.467628 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:12.535982 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:12.536058 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:12.536082 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:12.567564 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:12.567599 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:12.596038 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:12.596064 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:15.163498 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:15.173589 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:15.173668 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:15.204094 2054169 cri.go:96] found id: ""
	I1222 01:18:15.204118 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.204127 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:15.204133 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:15.204199 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:15.231433 2054169 cri.go:96] found id: ""
	I1222 01:18:15.231457 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.231465 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:15.231471 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:15.231533 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:15.260833 2054169 cri.go:96] found id: ""
	I1222 01:18:15.260858 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.260866 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:15.260873 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:15.260938 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:15.287329 2054169 cri.go:96] found id: ""
	I1222 01:18:15.287353 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.287362 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:15.287369 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:15.287432 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:15.313845 2054169 cri.go:96] found id: ""
	I1222 01:18:15.313870 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.313878 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:15.313885 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:15.313948 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:15.340173 2054169 cri.go:96] found id: ""
	I1222 01:18:15.340200 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.340209 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:15.340216 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:15.340276 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:15.367959 2054169 cri.go:96] found id: ""
	I1222 01:18:15.367983 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.367993 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:15.367999 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:15.368064 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:15.400225 2054169 cri.go:96] found id: ""
	I1222 01:18:15.400251 2054169 logs.go:282] 0 containers: []
	W1222 01:18:15.400260 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:15.400269 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:15.400281 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:15.475128 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:15.475169 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:15.491333 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:15.491361 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:15.560858 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:15.560878 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:15.560891 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:15.590204 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:15.590233 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:18.121737 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:18.133233 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:18.133308 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:18.160023 2054169 cri.go:96] found id: ""
	I1222 01:18:18.160049 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.160058 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:18.160065 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:18.160126 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:18.188462 2054169 cri.go:96] found id: ""
	I1222 01:18:18.188489 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.188498 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:18.188504 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:18.188566 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:18.214819 2054169 cri.go:96] found id: ""
	I1222 01:18:18.214842 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.214851 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:18.214858 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:18.214919 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:18.240105 2054169 cri.go:96] found id: ""
	I1222 01:18:18.240130 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.240138 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:18.240145 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:18.240205 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:18.274557 2054169 cri.go:96] found id: ""
	I1222 01:18:18.274578 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.274587 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:18.274593 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:18.274653 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:18.299731 2054169 cri.go:96] found id: ""
	I1222 01:18:18.299755 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.299764 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:18.299806 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:18.299883 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:18.324942 2054169 cri.go:96] found id: ""
	I1222 01:18:18.324966 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.324976 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:18.324982 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:18.325043 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:18.360469 2054169 cri.go:96] found id: ""
	I1222 01:18:18.360497 2054169 logs.go:282] 0 containers: []
	W1222 01:18:18.360507 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:18.360516 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:18.360528 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:18.402802 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:18.402838 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:18.431327 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:18.431353 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:18.498151 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:18.498187 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:18.514392 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:18.514424 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:18.578373 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:21.078617 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:21.090484 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:21.090646 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:21.128569 2054169 cri.go:96] found id: ""
	I1222 01:18:21.128605 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.128614 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:21.128621 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:21.128681 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:21.175892 2054169 cri.go:96] found id: ""
	I1222 01:18:21.175914 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.175922 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:21.175929 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:21.175986 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:21.211739 2054169 cri.go:96] found id: ""
	I1222 01:18:21.211761 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.211773 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:21.211780 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:21.211846 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:21.252263 2054169 cri.go:96] found id: ""
	I1222 01:18:21.252285 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.252293 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:21.252300 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:21.252358 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:21.296063 2054169 cri.go:96] found id: ""
	I1222 01:18:21.296084 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.296100 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:21.296106 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:21.296166 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:21.334856 2054169 cri.go:96] found id: ""
	I1222 01:18:21.334939 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.334953 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:21.334987 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:21.335089 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:21.407646 2054169 cri.go:96] found id: ""
	I1222 01:18:21.407667 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.407676 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:21.407682 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:21.407741 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:21.474777 2054169 cri.go:96] found id: ""
	I1222 01:18:21.474800 2054169 logs.go:282] 0 containers: []
	W1222 01:18:21.474808 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:21.474816 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:21.474833 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:21.507688 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:21.507721 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:21.538139 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:21.538165 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:21.607723 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:21.607761 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:21.623641 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:21.623672 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:21.688846 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:24.189437 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:24.203960 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:24.204072 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:24.255897 2054169 cri.go:96] found id: ""
	I1222 01:18:24.255936 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.255945 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:24.255951 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:24.256027 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:24.300281 2054169 cri.go:96] found id: ""
	I1222 01:18:24.300320 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.300331 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:24.300338 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:24.300409 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:24.352209 2054169 cri.go:96] found id: ""
	I1222 01:18:24.352252 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.352261 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:24.352268 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:24.352343 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:24.388698 2054169 cri.go:96] found id: ""
	I1222 01:18:24.388734 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.388744 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:24.388751 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:24.388828 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:24.451203 2054169 cri.go:96] found id: ""
	I1222 01:18:24.451284 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.451307 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:24.451329 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:24.451440 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:24.514887 2054169 cri.go:96] found id: ""
	I1222 01:18:24.514958 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.514981 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:24.515002 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:24.515092 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:24.545710 2054169 cri.go:96] found id: ""
	I1222 01:18:24.545787 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.545809 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:24.545829 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:24.545941 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:24.606252 2054169 cri.go:96] found id: ""
	I1222 01:18:24.606281 2054169 logs.go:282] 0 containers: []
	W1222 01:18:24.606309 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:24.606318 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:24.606334 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:24.682642 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:24.682720 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:24.774175 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:24.774216 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:24.794743 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:24.794768 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:24.918230 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:24.918303 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:24.918329 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:27.462677 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:27.474309 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:27.474383 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:27.503609 2054169 cri.go:96] found id: ""
	I1222 01:18:27.503633 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.503643 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:27.503649 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:27.503720 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:27.532200 2054169 cri.go:96] found id: ""
	I1222 01:18:27.532228 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.532237 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:27.532245 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:27.532311 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:27.562986 2054169 cri.go:96] found id: ""
	I1222 01:18:27.563009 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.563017 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:27.563024 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:27.563086 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:27.588449 2054169 cri.go:96] found id: ""
	I1222 01:18:27.588474 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.588483 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:27.588489 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:27.588550 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:27.615669 2054169 cri.go:96] found id: ""
	I1222 01:18:27.615697 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.615706 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:27.615712 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:27.615772 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:27.643899 2054169 cri.go:96] found id: ""
	I1222 01:18:27.643926 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.643935 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:27.643941 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:27.643998 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:27.670915 2054169 cri.go:96] found id: ""
	I1222 01:18:27.670938 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.670947 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:27.670953 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:27.671015 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:27.698733 2054169 cri.go:96] found id: ""
	I1222 01:18:27.698757 2054169 logs.go:282] 0 containers: []
	W1222 01:18:27.698766 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:27.698775 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:27.698789 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:27.767260 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:27.767281 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:27.767293 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:27.798455 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:27.798500 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:27.825586 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:27.825615 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:27.894841 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:27.894879 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:30.421456 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:30.433940 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:30.434027 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:30.476349 2054169 cri.go:96] found id: ""
	I1222 01:18:30.476372 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.476380 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:30.476386 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:30.476450 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:30.525703 2054169 cri.go:96] found id: ""
	I1222 01:18:30.525725 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.525733 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:30.525739 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:30.525800 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:30.564025 2054169 cri.go:96] found id: ""
	I1222 01:18:30.564048 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.564056 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:30.564063 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:30.564124 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:30.604312 2054169 cri.go:96] found id: ""
	I1222 01:18:30.604335 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.604344 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:30.604350 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:30.604417 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:30.645304 2054169 cri.go:96] found id: ""
	I1222 01:18:30.645560 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.645576 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:30.645583 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:30.645659 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:30.679056 2054169 cri.go:96] found id: ""
	I1222 01:18:30.679078 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.679086 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:30.679093 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:30.679162 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:30.705836 2054169 cri.go:96] found id: ""
	I1222 01:18:30.705858 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.705866 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:30.705872 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:30.705931 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:30.736226 2054169 cri.go:96] found id: ""
	I1222 01:18:30.736255 2054169 logs.go:282] 0 containers: []
	W1222 01:18:30.736274 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:30.736282 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:30.736294 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:30.809812 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:30.809856 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:30.826275 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:30.826303 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:30.892159 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:30.892228 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:30.892247 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:30.926196 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:30.926233 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:33.453921 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:33.466839 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:33.466940 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:33.495338 2054169 cri.go:96] found id: ""
	I1222 01:18:33.495361 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.495403 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:33.495417 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:33.495499 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:33.521268 2054169 cri.go:96] found id: ""
	I1222 01:18:33.521294 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.521312 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:33.521350 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:33.521424 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:33.552119 2054169 cri.go:96] found id: ""
	I1222 01:18:33.552145 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.552153 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:33.552168 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:33.552248 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:33.577410 2054169 cri.go:96] found id: ""
	I1222 01:18:33.577449 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.577459 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:33.577466 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:33.577541 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:33.613586 2054169 cri.go:96] found id: ""
	I1222 01:18:33.613666 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.613688 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:33.613708 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:33.613800 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:33.642288 2054169 cri.go:96] found id: ""
	I1222 01:18:33.642357 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.642378 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:33.642398 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:33.642489 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:33.671784 2054169 cri.go:96] found id: ""
	I1222 01:18:33.671809 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.671818 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:33.671824 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:33.671894 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:33.696605 2054169 cri.go:96] found id: ""
	I1222 01:18:33.696686 2054169 logs.go:282] 0 containers: []
	W1222 01:18:33.696709 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:33.696732 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:33.696774 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:33.764749 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:33.764783 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:33.780857 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:33.780888 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:33.844522 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:33.844541 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:33.844554 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:33.875521 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:33.875554 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:36.403930 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:36.415093 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:36.415167 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:36.451909 2054169 cri.go:96] found id: ""
	I1222 01:18:36.451935 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.451946 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:36.451953 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:36.452014 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:36.495575 2054169 cri.go:96] found id: ""
	I1222 01:18:36.495600 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.495609 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:36.495614 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:36.495676 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:36.529398 2054169 cri.go:96] found id: ""
	I1222 01:18:36.529425 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.529434 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:36.529441 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:36.529501 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:36.563770 2054169 cri.go:96] found id: ""
	I1222 01:18:36.563797 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.563806 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:36.563812 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:36.563872 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:36.591645 2054169 cri.go:96] found id: ""
	I1222 01:18:36.591670 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.591680 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:36.591686 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:36.591748 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:36.633074 2054169 cri.go:96] found id: ""
	I1222 01:18:36.633100 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.633109 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:36.633117 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:36.633180 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:36.659948 2054169 cri.go:96] found id: ""
	I1222 01:18:36.659976 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.659985 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:36.659992 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:36.660055 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:36.685732 2054169 cri.go:96] found id: ""
	I1222 01:18:36.685754 2054169 logs.go:282] 0 containers: []
	W1222 01:18:36.685775 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:36.685785 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:36.685797 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:36.702199 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:36.702227 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:36.771159 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:36.771177 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:36.771188 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:36.801999 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:36.802034 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:36.832353 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:36.832381 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
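The cycle above repeats on a roughly three-second cadence: minikube first probes for a kube-apiserver process, then sweeps the CRI for each expected control-plane container, and, finding none, falls back to gathering diagnostics. A minimal Go sketch of that wait loop, built from only the two commands the log itself runs (helper names here are illustrative, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls the same way the log above does: pgrep for the
    // process, then a CRI listing for the container. Illustrative sketch only.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// sudo pgrep -xnf kube-apiserver.*minikube.*
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // a live apiserver process was found
    		}
    		// sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
    		out, _ := exec.Command("sudo", "crictl", "--timeout=10s",
    			"ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    		if len(out) > 0 {
    			return nil // at least one matching container ID was listed
    		}
    		time.Sleep(3 * time.Second) // matches the cadence visible in the timestamps
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
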
	I1222 01:18:39.408240 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:39.418415 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:39.418488 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:39.447763 2054169 cri.go:96] found id: ""
	I1222 01:18:39.447829 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.447852 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:39.447872 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:39.447964 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:39.481485 2054169 cri.go:96] found id: ""
	I1222 01:18:39.481510 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.481519 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:39.481525 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:39.481588 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:39.510653 2054169 cri.go:96] found id: ""
	I1222 01:18:39.510676 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.510685 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:39.510693 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:39.510753 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:39.537443 2054169 cri.go:96] found id: ""
	I1222 01:18:39.537470 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.537479 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:39.537485 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:39.537545 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:39.565200 2054169 cri.go:96] found id: ""
	I1222 01:18:39.565226 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.565235 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:39.565241 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:39.565307 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:39.592314 2054169 cri.go:96] found id: ""
	I1222 01:18:39.592343 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.592352 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:39.592358 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:39.592421 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:39.624782 2054169 cri.go:96] found id: ""
	I1222 01:18:39.624809 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.624818 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:39.624824 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:39.624884 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:39.658113 2054169 cri.go:96] found id: ""
	I1222 01:18:39.658141 2054169 logs.go:282] 0 containers: []
	W1222 01:18:39.658149 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:39.658157 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:39.658171 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:39.727932 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:39.727972 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:39.746782 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:39.746820 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:39.817585 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:39.817610 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:39.817622 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:39.848916 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:39.848954 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
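Each cri.go:61 / logs.go:282 pair in these cycles is one step of the same sweep: a `crictl ps -a --quiet --name=<component>` per expected workload, with an empty result logged as `found id: ""` and then warned as "No container was found matching". The sweep can be reproduced standalone; a sketch in Go, using the component list exactly as it appears in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The eight names the log checks, in the order it checks them.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
    			"ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.TrimSpace(string(out))
    		if err != nil || ids == "" {
    			fmt.Printf("no container matching %q\n", name) // the log's W-level case
    			continue
    		}
    		fmt.Printf("%s: %s\n", name, ids)
    	}
    }
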
	I1222 01:18:42.378209 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:42.390124 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:42.390203 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:42.420165 2054169 cri.go:96] found id: ""
	I1222 01:18:42.420191 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.420200 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:42.420207 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:42.420266 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:42.445746 2054169 cri.go:96] found id: ""
	I1222 01:18:42.445769 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.445777 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:42.445784 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:42.445845 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:42.473603 2054169 cri.go:96] found id: ""
	I1222 01:18:42.473635 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.473644 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:42.473652 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:42.473727 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:42.498284 2054169 cri.go:96] found id: ""
	I1222 01:18:42.498308 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.498317 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:42.498323 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:42.498381 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:42.528757 2054169 cri.go:96] found id: ""
	I1222 01:18:42.528783 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.528793 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:42.528799 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:42.528859 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:42.556111 2054169 cri.go:96] found id: ""
	I1222 01:18:42.556136 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.556145 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:42.556151 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:42.556212 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:42.582873 2054169 cri.go:96] found id: ""
	I1222 01:18:42.582897 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.582906 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:42.582913 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:42.582975 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:42.628172 2054169 cri.go:96] found id: ""
	I1222 01:18:42.628194 2054169 logs.go:282] 0 containers: []
	W1222 01:18:42.628203 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:42.628212 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:42.628223 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:42.662163 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:42.662202 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:42.692540 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:42.692566 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:42.762695 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:42.762730 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:42.781152 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:42.781181 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:42.844327 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
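The recurring "describe nodes" failure is a symptom of the same root cause, not a separate one: kubectl points at localhost:8443 and nothing is listening there, so the connection is refused outright. "Connection refused" distinguishes a closed port from a slow or misconfigured server; a two-line probe makes the same check directly (a sketch, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Is anything listening where the kubeconfig says the apiserver should be?
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // "connection refused" => port closed
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
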
	I1222 01:18:45.345534 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:45.358383 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:45.358457 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:45.385673 2054169 cri.go:96] found id: ""
	I1222 01:18:45.385697 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.385706 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:45.385718 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:45.385782 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:45.413238 2054169 cri.go:96] found id: ""
	I1222 01:18:45.413314 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.413360 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:45.413380 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:45.413455 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:45.440434 2054169 cri.go:96] found id: ""
	I1222 01:18:45.440459 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.440467 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:45.440474 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:45.440533 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:45.474857 2054169 cri.go:96] found id: ""
	I1222 01:18:45.474880 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.474888 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:45.474895 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:45.474960 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:45.500006 2054169 cri.go:96] found id: ""
	I1222 01:18:45.500033 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.500044 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:45.500051 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:45.500112 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:45.527473 2054169 cri.go:96] found id: ""
	I1222 01:18:45.527499 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.527509 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:45.527515 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:45.527576 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:45.554337 2054169 cri.go:96] found id: ""
	I1222 01:18:45.554362 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.554370 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:45.554377 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:45.554441 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:45.582421 2054169 cri.go:96] found id: ""
	I1222 01:18:45.582446 2054169 logs.go:282] 0 containers: []
	W1222 01:18:45.582455 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:45.582464 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:45.582494 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:45.657546 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:45.657581 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:45.674170 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:45.674199 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:45.739541 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:45.739563 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:45.739579 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:45.772211 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:45.772243 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
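After each failed sweep the same five diagnostic sources are collected: the kubelet and CRI-O journals, severity-filtered dmesg, `kubectl describe nodes`, and a container listing (only the gathering order rotates between cycles). A sketch that runs the same bundle, with the commands copied verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The five sources each cycle gathers; commands taken from the log above.
    	steps := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range steps {
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("== %s ==\n%s", s.name, out)
    		if err != nil {
    			fmt.Printf("(%s exited with error: %v)\n", s.name, err)
    		}
    	}
    }
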
	I1222 01:18:48.303945 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:48.313851 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:48.313922 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:48.343594 2054169 cri.go:96] found id: ""
	I1222 01:18:48.343617 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.343625 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:48.343632 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:48.343690 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:48.368466 2054169 cri.go:96] found id: ""
	I1222 01:18:48.368492 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.368501 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:48.368507 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:48.368568 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:48.393721 2054169 cri.go:96] found id: ""
	I1222 01:18:48.393744 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.393752 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:48.393758 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:48.393821 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:48.419629 2054169 cri.go:96] found id: ""
	I1222 01:18:48.419654 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.419662 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:48.419669 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:48.419728 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:48.445882 2054169 cri.go:96] found id: ""
	I1222 01:18:48.445910 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.445918 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:48.445925 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:48.445984 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:48.482064 2054169 cri.go:96] found id: ""
	I1222 01:18:48.482088 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.482096 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:48.482102 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:48.482160 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:48.507365 2054169 cri.go:96] found id: ""
	I1222 01:18:48.507388 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.507396 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:48.507403 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:48.507466 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:48.537538 2054169 cri.go:96] found id: ""
	I1222 01:18:48.537563 2054169 logs.go:282] 0 containers: []
	W1222 01:18:48.537573 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:48.537580 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:48.537592 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:48.568608 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:48.568642 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:48.609922 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:48.609950 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:48.681169 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:48.681206 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:48.697607 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:48.697635 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:48.761407 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:51.261651 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:51.272082 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:51.272239 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:51.303671 2054169 cri.go:96] found id: ""
	I1222 01:18:51.303708 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.303717 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:51.303724 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:51.303870 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:51.334253 2054169 cri.go:96] found id: ""
	I1222 01:18:51.334282 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.334325 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:51.334339 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:51.334443 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:51.369045 2054169 cri.go:96] found id: ""
	I1222 01:18:51.369071 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.369079 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:51.369086 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:51.369151 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:51.412997 2054169 cri.go:96] found id: ""
	I1222 01:18:51.413024 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.413045 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:51.413053 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:51.413117 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:51.446176 2054169 cri.go:96] found id: ""
	I1222 01:18:51.446217 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.446225 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:51.446232 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:51.446299 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:51.499115 2054169 cri.go:96] found id: ""
	I1222 01:18:51.499141 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.499149 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:51.499155 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:51.499215 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:51.525859 2054169 cri.go:96] found id: ""
	I1222 01:18:51.525884 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.525892 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:51.525899 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:51.525961 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:51.555937 2054169 cri.go:96] found id: ""
	I1222 01:18:51.555962 2054169 logs.go:282] 0 containers: []
	W1222 01:18:51.555971 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:51.555979 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:51.555993 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:51.630085 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:51.630124 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:51.650865 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:51.650897 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:51.722604 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:51.722627 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:51.722641 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:51.753686 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:51.753723 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
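The "container status" step uses a shell fallback: prefer crictl when it is on the PATH, otherwise try docker, so the listing works whichever runtime is installed. The same selection logic expressed in Go (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl when present, mirroring:
    	//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    	tool := "docker"
    	if _, err := exec.LookPath("crictl"); err == nil {
    		tool = "crictl"
    	}
    	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s ps -a failed: %v\n", tool, err)
    	}
    	fmt.Print(string(out))
    }
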
	I1222 01:18:54.283771 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:54.294001 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:54.294085 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:54.322604 2054169 cri.go:96] found id: ""
	I1222 01:18:54.322667 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.322699 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:54.322722 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:54.322817 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:54.358955 2054169 cri.go:96] found id: ""
	I1222 01:18:54.359032 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.359064 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:54.359085 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:54.359177 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:54.397706 2054169 cri.go:96] found id: ""
	I1222 01:18:54.397781 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.397802 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:54.397822 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:54.397909 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:54.454633 2054169 cri.go:96] found id: ""
	I1222 01:18:54.454662 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.454671 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:54.454677 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:54.454735 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:54.494194 2054169 cri.go:96] found id: ""
	I1222 01:18:54.494216 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.494225 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:54.494238 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:54.494303 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:54.522655 2054169 cri.go:96] found id: ""
	I1222 01:18:54.522675 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.522684 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:54.522690 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:54.522764 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:54.553102 2054169 cri.go:96] found id: ""
	I1222 01:18:54.553124 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.553132 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:54.553139 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:54.553200 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:54.582757 2054169 cri.go:96] found id: ""
	I1222 01:18:54.582778 2054169 logs.go:282] 0 containers: []
	W1222 01:18:54.582787 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:54.582795 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:54.582808 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:18:54.672653 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:54.672686 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:54.708638 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:54.708714 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:54.810447 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:54.810490 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:54.810525 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:54.844211 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:54.844246 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:57.403870 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:18:57.413980 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:18:57.414057 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:18:57.463253 2054169 cri.go:96] found id: ""
	I1222 01:18:57.463276 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.463292 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:18:57.463299 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:18:57.463361 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:18:57.504097 2054169 cri.go:96] found id: ""
	I1222 01:18:57.504119 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.504128 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:18:57.504140 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:18:57.504203 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:18:57.533719 2054169 cri.go:96] found id: ""
	I1222 01:18:57.533741 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.533750 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:18:57.533757 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:18:57.533821 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:18:57.561619 2054169 cri.go:96] found id: ""
	I1222 01:18:57.561645 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.561654 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:18:57.561660 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:18:57.561719 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:18:57.589681 2054169 cri.go:96] found id: ""
	I1222 01:18:57.589706 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.589714 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:18:57.589721 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:18:57.589785 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:18:57.670485 2054169 cri.go:96] found id: ""
	I1222 01:18:57.670508 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.670517 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:18:57.670524 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:18:57.670589 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:18:57.713668 2054169 cri.go:96] found id: ""
	I1222 01:18:57.713690 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.713698 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:18:57.713704 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:18:57.713762 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:18:57.751628 2054169 cri.go:96] found id: ""
	I1222 01:18:57.751657 2054169 logs.go:282] 0 containers: []
	W1222 01:18:57.751666 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:18:57.751674 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:18:57.751689 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:18:57.769291 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:18:57.769336 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:18:57.864647 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:18:57.864679 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:18:57.864691 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:18:57.899803 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:18:57.899838 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:18:57.944180 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:18:57.944208 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:00.543119 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:00.554137 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:00.554209 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:00.580671 2054169 cri.go:96] found id: ""
	I1222 01:19:00.580697 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.580707 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:00.580714 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:00.580776 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:00.633350 2054169 cri.go:96] found id: ""
	I1222 01:19:00.633376 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.633387 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:00.633394 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:00.633453 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:00.663762 2054169 cri.go:96] found id: ""
	I1222 01:19:00.663788 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.663797 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:00.663804 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:00.663864 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:00.690020 2054169 cri.go:96] found id: ""
	I1222 01:19:00.690045 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.690053 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:00.690060 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:00.690123 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:00.716446 2054169 cri.go:96] found id: ""
	I1222 01:19:00.716471 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.716480 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:00.716487 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:00.716556 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:00.743315 2054169 cri.go:96] found id: ""
	I1222 01:19:00.743343 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.743359 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:00.743366 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:00.743424 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:00.768224 2054169 cri.go:96] found id: ""
	I1222 01:19:00.768249 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.768258 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:00.768265 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:00.768331 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:00.796423 2054169 cri.go:96] found id: ""
	I1222 01:19:00.796448 2054169 logs.go:282] 0 containers: []
	W1222 01:19:00.796457 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:00.796465 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:00.796477 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:00.812916 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:00.812951 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:00.878503 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:00.878522 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:00.878540 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:00.916574 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:00.916618 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:00.956780 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:00.956812 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:03.538159 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:03.548124 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:03.548193 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:03.580623 2054169 cri.go:96] found id: ""
	I1222 01:19:03.580646 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.580654 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:03.580660 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:03.580719 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:03.624470 2054169 cri.go:96] found id: ""
	I1222 01:19:03.624492 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.624501 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:03.624507 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:03.624566 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:03.658501 2054169 cri.go:96] found id: ""
	I1222 01:19:03.658523 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.658531 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:03.658537 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:03.658597 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:03.685033 2054169 cri.go:96] found id: ""
	I1222 01:19:03.685055 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.685063 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:03.685070 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:03.685133 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:03.712134 2054169 cri.go:96] found id: ""
	I1222 01:19:03.712156 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.712164 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:03.712171 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:03.712230 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:03.738838 2054169 cri.go:96] found id: ""
	I1222 01:19:03.738863 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.738871 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:03.738877 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:03.738935 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:03.771590 2054169 cri.go:96] found id: ""
	I1222 01:19:03.771612 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.771621 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:03.771627 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:03.771687 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:03.797502 2054169 cri.go:96] found id: ""
	I1222 01:19:03.797525 2054169 logs.go:282] 0 containers: []
	W1222 01:19:03.797533 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:03.797541 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:03.797551 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:03.828495 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:03.828582 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:03.896135 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:03.896170 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:03.913856 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:03.913885 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:03.985090 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:03.985113 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:03.985129 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:06.519822 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:06.529952 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:06.530028 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:06.555958 2054169 cri.go:96] found id: ""
	I1222 01:19:06.555983 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.555992 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:06.555999 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:06.556060 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:06.583672 2054169 cri.go:96] found id: ""
	I1222 01:19:06.583695 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.583703 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:06.583709 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:06.583774 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:06.623705 2054169 cri.go:96] found id: ""
	I1222 01:19:06.623729 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.623738 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:06.623745 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:06.623803 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:06.655636 2054169 cri.go:96] found id: ""
	I1222 01:19:06.655663 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.655671 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:06.655678 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:06.655744 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:06.680748 2054169 cri.go:96] found id: ""
	I1222 01:19:06.680775 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.680783 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:06.680790 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:06.680911 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:06.708142 2054169 cri.go:96] found id: ""
	I1222 01:19:06.708169 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.708178 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:06.708184 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:06.708245 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:06.737184 2054169 cri.go:96] found id: ""
	I1222 01:19:06.737211 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.737220 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:06.737226 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:06.737291 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:06.763061 2054169 cri.go:96] found id: ""
	I1222 01:19:06.763088 2054169 logs.go:282] 0 containers: []
	W1222 01:19:06.763096 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:06.763106 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:06.763117 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:06.792439 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:06.792467 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:06.862622 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:06.862673 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:06.879247 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:06.879285 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:06.941973 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:06.941992 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:06.942005 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:09.475032 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:09.485030 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:09.485107 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:09.513400 2054169 cri.go:96] found id: ""
	I1222 01:19:09.513426 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.513435 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:09.513442 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:09.513507 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:09.540274 2054169 cri.go:96] found id: ""
	I1222 01:19:09.540297 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.540306 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:09.540312 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:09.540373 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:09.567000 2054169 cri.go:96] found id: ""
	I1222 01:19:09.567026 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.567035 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:09.567041 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:09.567103 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:09.599116 2054169 cri.go:96] found id: ""
	I1222 01:19:09.599141 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.599150 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:09.599157 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:09.599219 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:09.634139 2054169 cri.go:96] found id: ""
	I1222 01:19:09.634164 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.634173 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:09.634179 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:09.634240 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:09.668990 2054169 cri.go:96] found id: ""
	I1222 01:19:09.669016 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.669025 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:09.669031 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:09.669091 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:09.694740 2054169 cri.go:96] found id: ""
	I1222 01:19:09.694771 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.694780 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:09.694787 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:09.694878 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:09.720830 2054169 cri.go:96] found id: ""
	I1222 01:19:09.720856 2054169 logs.go:282] 0 containers: []
	W1222 01:19:09.720864 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:09.720873 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:09.720885 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:09.789598 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:09.789639 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:09.806551 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:09.806582 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:09.873541 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:09.873602 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:09.873621 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:09.904249 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:09.904283 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
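	Each retry cycle above follows the same shape: a `pgrep` check for a running kube-apiserver process, then one `crictl ps -a --quiet --name=<component>` query per control-plane component; because the node is running no containers at all, every query returns an empty ID list and is logged as a W-level miss. Below is a minimal Go sketch of that listing loop — `runSSH` and `listCRIContainers` are hypothetical stand-ins for minikube's ssh_runner/cri internals (the cri.go:61/96 and logs.go:282/284 call sites above), which this excerpt does not show.

```go
// Sketch of the per-component CRI scan seen in each cycle. runSSH and
// listCRIContainers are hypothetical; here the command just runs locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runSSH(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
	return string(out), err
}

// listCRIContainers mirrors `sudo crictl --timeout=10s ps -a --quiet --name=X`:
// --quiet prints only container IDs, one per line, so empty output means
// "no container was found matching X".
func listCRIContainers(name string) ([]string, error) {
	out, err := runSSH(fmt.Sprintf(
		"sudo crictl --timeout=10s ps -a --quiet --name=%s", name))
	if err != nil {
		return nil, err
	}
	return strings.Fields(out), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := listCRIContainers(c)
		if err != nil || len(ids) == 0 {
			// Matches the W-level lines above: every scan comes back empty
			// because the node is running no containers at all.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```

	The `--timeout=10s` guard never bites here: the runtime is healthy but empty, so each query returns almost immediately, which is why the whole eight-component scan fits inside a single ~3-second cycle.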
	I1222 01:19:12.435090 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:12.445358 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:12.445434 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:12.474291 2054169 cri.go:96] found id: ""
	I1222 01:19:12.474317 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.474326 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:12.474333 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:12.474393 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:12.498834 2054169 cri.go:96] found id: ""
	I1222 01:19:12.498859 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.498869 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:12.498876 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:12.498950 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:12.524803 2054169 cri.go:96] found id: ""
	I1222 01:19:12.524829 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.524837 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:12.524844 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:12.524905 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:12.549948 2054169 cri.go:96] found id: ""
	I1222 01:19:12.549971 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.549980 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:12.549986 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:12.550050 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:12.576004 2054169 cri.go:96] found id: ""
	I1222 01:19:12.576037 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.576053 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:12.576060 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:12.576132 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:12.617979 2054169 cri.go:96] found id: ""
	I1222 01:19:12.618004 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.618013 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:12.618019 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:12.618078 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:12.647800 2054169 cri.go:96] found id: ""
	I1222 01:19:12.647825 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.647834 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:12.647841 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:12.647902 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:12.677111 2054169 cri.go:96] found id: ""
	I1222 01:19:12.677137 2054169 logs.go:282] 0 containers: []
	W1222 01:19:12.677145 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:12.677154 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:12.677165 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:12.704704 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:12.704731 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:12.771617 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:12.771653 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:12.788741 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:12.788774 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:12.861478 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:12.861501 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:12.861542 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:15.396623 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:15.406589 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:15.406665 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:15.435703 2054169 cri.go:96] found id: ""
	I1222 01:19:15.435764 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.435776 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:15.435783 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:15.435850 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:15.474199 2054169 cri.go:96] found id: ""
	I1222 01:19:15.474223 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.474232 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:15.474238 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:15.474297 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:15.504461 2054169 cri.go:96] found id: ""
	I1222 01:19:15.504486 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.504495 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:15.504502 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:15.504564 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:15.529956 2054169 cri.go:96] found id: ""
	I1222 01:19:15.529982 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.529990 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:15.529997 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:15.530063 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:15.558655 2054169 cri.go:96] found id: ""
	I1222 01:19:15.558680 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.558689 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:15.558695 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:15.558763 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:15.584786 2054169 cri.go:96] found id: ""
	I1222 01:19:15.584812 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.584821 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:15.584827 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:15.584889 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:15.614120 2054169 cri.go:96] found id: ""
	I1222 01:19:15.614144 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.614153 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:15.614159 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:15.614222 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:15.655205 2054169 cri.go:96] found id: ""
	I1222 01:19:15.655230 2054169 logs.go:282] 0 containers: []
	W1222 01:19:15.655239 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:15.655247 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:15.655259 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:15.725780 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:15.725799 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:15.725814 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:15.759628 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:15.759663 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:15.790942 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:15.790971 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:15.862290 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:15.862336 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
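	After the component scan, each cycle gathers the same five log sources: kubelet and CRI-O via journalctl, kernel messages via dmesg (util-linux flags: -H human-readable, -P no pager, -L=never to disable color, warn-and-worse levels only), `kubectl describe nodes` against the node-local kubeconfig, and a container-status listing that falls back from crictl to docker. Note that the "Gathering logs for ..." order shuffles from cycle to cycle (kubelet-first at 01:19:09, container-status-first at 01:19:12, describe-first at 01:19:15), which is consistent with the sources living in a Go map, since Go deliberately randomizes map iteration order. A sketch under that assumption — the map, names, and error text are hypothetical; the commands are copied from the log:

```go
// Sketch of the per-cycle log gathering, assuming the five sources live in
// a map (which would explain the shuffled ordering across cycles).
package main

import (
	"fmt"
	"os/exec"
)

var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range logSources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// Mirrors the logs.go:130 warning above: the failed command,
			// its exit error, and its captured output are all surfaced.
			fmt.Printf("failed %s: %v\n%s\n", name, err, out)
		}
	}
}
```

	Only `describe nodes` ever fails, and only because of the apiserver: journalctl, dmesg, and crictl talk to the node directly, so they keep succeeding while everything routed through the kubeconfig is refused.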
	I1222 01:19:18.379949 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:18.390108 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:18.390181 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:18.417119 2054169 cri.go:96] found id: ""
	I1222 01:19:18.417142 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.417151 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:18.417157 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:18.417217 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:18.441732 2054169 cri.go:96] found id: ""
	I1222 01:19:18.441758 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.441767 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:18.441773 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:18.441837 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:18.473379 2054169 cri.go:96] found id: ""
	I1222 01:19:18.473406 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.473414 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:18.473421 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:18.473483 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:18.499259 2054169 cri.go:96] found id: ""
	I1222 01:19:18.499284 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.499293 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:18.499300 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:18.499395 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:18.528564 2054169 cri.go:96] found id: ""
	I1222 01:19:18.528594 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.528603 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:18.528610 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:18.528675 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:18.554733 2054169 cri.go:96] found id: ""
	I1222 01:19:18.554760 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.554769 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:18.554775 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:18.554835 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:18.579148 2054169 cri.go:96] found id: ""
	I1222 01:19:18.579171 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.579180 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:18.579187 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:18.579253 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:18.617219 2054169 cri.go:96] found id: ""
	I1222 01:19:18.617304 2054169 logs.go:282] 0 containers: []
	W1222 01:19:18.617353 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:18.617377 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:18.617404 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:18.692748 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:18.692787 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:18.709437 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:18.709467 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:18.783254 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:18.783278 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:18.783291 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:18.814861 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:18.814895 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:21.343134 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:21.353660 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:21.353729 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:21.382003 2054169 cri.go:96] found id: ""
	I1222 01:19:21.382032 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.382041 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:21.382047 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:21.382106 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:21.410191 2054169 cri.go:96] found id: ""
	I1222 01:19:21.410217 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.410226 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:21.410233 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:21.410294 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:21.451334 2054169 cri.go:96] found id: ""
	I1222 01:19:21.451359 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.451368 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:21.451375 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:21.451435 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:21.497902 2054169 cri.go:96] found id: ""
	I1222 01:19:21.497929 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.497938 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:21.497944 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:21.498004 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:21.528145 2054169 cri.go:96] found id: ""
	I1222 01:19:21.528181 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.528190 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:21.528197 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:21.528290 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:21.569063 2054169 cri.go:96] found id: ""
	I1222 01:19:21.569090 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.569099 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:21.569105 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:21.569165 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:21.614037 2054169 cri.go:96] found id: ""
	I1222 01:19:21.614060 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.614068 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:21.614075 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:21.614135 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:21.661146 2054169 cri.go:96] found id: ""
	I1222 01:19:21.661172 2054169 logs.go:282] 0 containers: []
	W1222 01:19:21.661181 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:21.661189 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:21.661201 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:21.685104 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:21.685133 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:21.766574 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:21.766595 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:21.766608 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:21.798301 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:21.798339 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:21.830170 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:21.830195 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:24.399093 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:24.413305 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:24.413392 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:24.456883 2054169 cri.go:96] found id: ""
	I1222 01:19:24.456910 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.456919 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:24.456926 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:24.456986 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:24.499586 2054169 cri.go:96] found id: ""
	I1222 01:19:24.499611 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.499620 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:24.499626 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:24.499684 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:24.525862 2054169 cri.go:96] found id: ""
	I1222 01:19:24.525888 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.525897 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:24.525903 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:24.525964 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:24.566712 2054169 cri.go:96] found id: ""
	I1222 01:19:24.566739 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.566748 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:24.566755 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:24.566817 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:24.605511 2054169 cri.go:96] found id: ""
	I1222 01:19:24.605539 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.605547 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:24.605560 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:24.605649 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:24.690626 2054169 cri.go:96] found id: ""
	I1222 01:19:24.690651 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.690659 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:24.690666 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:24.690724 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:24.726384 2054169 cri.go:96] found id: ""
	I1222 01:19:24.726407 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.726415 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:24.726422 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:24.726482 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:24.756361 2054169 cri.go:96] found id: ""
	I1222 01:19:24.756387 2054169 logs.go:282] 0 containers: []
	W1222 01:19:24.756396 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:24.756404 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:24.756417 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:24.772479 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:24.772509 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:24.871741 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:24.871762 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:24.871775 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:24.905063 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:24.905091 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:24.942964 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:24.942994 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:27.527855 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:27.538371 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:27.538446 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:27.564269 2054169 cri.go:96] found id: ""
	I1222 01:19:27.564296 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.564305 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:27.564311 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:27.564375 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:27.589549 2054169 cri.go:96] found id: ""
	I1222 01:19:27.589578 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.589587 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:27.589595 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:27.589655 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:27.617754 2054169 cri.go:96] found id: ""
	I1222 01:19:27.617782 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.617791 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:27.617798 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:27.617859 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:27.650490 2054169 cri.go:96] found id: ""
	I1222 01:19:27.650519 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.650528 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:27.650535 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:27.650601 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:27.676992 2054169 cri.go:96] found id: ""
	I1222 01:19:27.677027 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.677036 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:27.677043 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:27.677115 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:27.717152 2054169 cri.go:96] found id: ""
	I1222 01:19:27.717226 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.717248 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:27.717269 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:27.717377 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:27.749630 2054169 cri.go:96] found id: ""
	I1222 01:19:27.749666 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.749675 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:27.749682 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:27.749747 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:27.786546 2054169 cri.go:96] found id: ""
	I1222 01:19:27.786580 2054169 logs.go:282] 0 containers: []
	W1222 01:19:27.786589 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:27.786598 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:27.786611 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:27.863997 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:27.864026 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:27.864039 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:27.916006 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:27.916080 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:27.970265 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:27.970351 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:28.050751 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:28.050789 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:30.573456 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:30.583700 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:30.583775 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:30.623834 2054169 cri.go:96] found id: ""
	I1222 01:19:30.623856 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.623864 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:30.623870 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:30.623940 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:30.659489 2054169 cri.go:96] found id: ""
	I1222 01:19:30.659510 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.659518 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:30.659525 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:30.659582 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:30.693946 2054169 cri.go:96] found id: ""
	I1222 01:19:30.693969 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.693977 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:30.693984 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:30.694045 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:30.718950 2054169 cri.go:96] found id: ""
	I1222 01:19:30.718972 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.718981 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:30.718987 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:30.719047 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:30.744964 2054169 cri.go:96] found id: ""
	I1222 01:19:30.744986 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.745055 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:30.745067 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:30.745129 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:30.772328 2054169 cri.go:96] found id: ""
	I1222 01:19:30.772350 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.772358 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:30.772365 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:30.772439 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:30.798467 2054169 cri.go:96] found id: ""
	I1222 01:19:30.798489 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.798497 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:30.798505 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:30.798566 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:30.824047 2054169 cri.go:96] found id: ""
	I1222 01:19:30.824069 2054169 logs.go:282] 0 containers: []
	W1222 01:19:30.824078 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:30.824086 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:30.824098 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:30.855985 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:30.856018 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:30.884259 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:30.884288 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:30.952319 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:30.952354 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:30.968557 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:30.968589 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:31.032312 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
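	By this point the loop has run for roughly half a minute with no state change: the timestamps (01:19:06.9, :09.4, :12.4, ..., :30.5) show a steady ~3-second cadence, which reads as a poll-until-deadline wait on the apiserver. A minimal sketch of such a loop follows; the 3-second interval matches the observed cadence, but the overall timeout is an assumption — the excerpt shows only the retries, never the loop's parameters.

```go
// Minimal sketch of the wait loop implied by the ~3s cadence above.
// The 3s interval matches the timestamps; the deadline is invented
// purely for the sketch.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning is the same check that opens every cycle in the log:
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("/bin/bash", "-c",
		"sudo pgrep -xnf kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		// In the log, each miss is followed by the CRI scan and the
		// five-source log gathering before the next attempt.
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

	Read this way, the per-cycle diagnostics above are a side effect of the health wait rather than independent failures: each missed poll triggers one more round of evidence gathering.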
	I1222 01:19:33.533065 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:33.543178 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:33.543251 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:33.569011 2054169 cri.go:96] found id: ""
	I1222 01:19:33.569037 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.569046 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:33.569052 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:33.569112 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:33.605188 2054169 cri.go:96] found id: ""
	I1222 01:19:33.605213 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.605223 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:33.605229 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:33.605290 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:33.639460 2054169 cri.go:96] found id: ""
	I1222 01:19:33.639485 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.639495 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:33.639501 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:33.639560 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:33.668865 2054169 cri.go:96] found id: ""
	I1222 01:19:33.668891 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.668900 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:33.668907 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:33.668968 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:33.696064 2054169 cri.go:96] found id: ""
	I1222 01:19:33.696090 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.696098 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:33.696105 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:33.696164 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:33.721381 2054169 cri.go:96] found id: ""
	I1222 01:19:33.721406 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.721415 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:33.721422 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:33.721489 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:33.746542 2054169 cri.go:96] found id: ""
	I1222 01:19:33.746565 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.746574 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:33.746580 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:33.746639 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:33.770776 2054169 cri.go:96] found id: ""
	I1222 01:19:33.770800 2054169 logs.go:282] 0 containers: []
	W1222 01:19:33.770809 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:33.770818 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:33.770830 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:33.802031 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:33.802067 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:33.830221 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:33.830249 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:33.899311 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:33.899349 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:33.916328 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:33.916353 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:33.976673 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:36.476948 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:36.489555 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:36.489626 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:36.538034 2054169 cri.go:96] found id: ""
	I1222 01:19:36.538056 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.538064 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:36.538071 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:36.538129 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:36.576931 2054169 cri.go:96] found id: ""
	I1222 01:19:36.576953 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.576961 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:36.576968 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:36.577030 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:36.631912 2054169 cri.go:96] found id: ""
	I1222 01:19:36.631942 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.631951 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:36.631957 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:36.632018 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:36.698958 2054169 cri.go:96] found id: ""
	I1222 01:19:36.698979 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.698988 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:36.699000 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:36.699059 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:36.734642 2054169 cri.go:96] found id: ""
	I1222 01:19:36.734663 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.734671 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:36.734677 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:36.734741 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:36.766255 2054169 cri.go:96] found id: ""
	I1222 01:19:36.766278 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.766286 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:36.766294 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:36.766354 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:36.795603 2054169 cri.go:96] found id: ""
	I1222 01:19:36.795624 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.795633 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:36.795639 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:36.795700 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:36.826631 2054169 cri.go:96] found id: ""
	I1222 01:19:36.826652 2054169 logs.go:282] 0 containers: []
	W1222 01:19:36.826660 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:36.826668 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:36.826680 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:36.909510 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:36.909588 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:36.926522 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:36.926549 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:37.021602 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:37.021674 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:37.021701 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:37.059549 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:37.059586 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
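	The recurring kubectl error is an ordinary TCP refusal, not an auth or kubeconfig problem: with zero kube-apiserver containers (per every crictl scan above), nothing listens on the node's port 8443, so the kernel refuses the connection before kubectl can send any request. A quick probe reproduces the same connect-time failure; this is illustrative only and would need to run inside the node:

```go
// Probe localhost:8443 the way kubectl's TCP dial does. With no apiserver
// container running, the dial fails with "connection refused" before any
// API semantics are involved.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println(err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```

	This is also why the same one-line stderr repeats verbatim in every describe-nodes block: the failure happens at connect time, so there is nothing request-specific to vary between attempts.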
	I1222 01:19:39.600692 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:39.612478 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:39.612553 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:39.650767 2054169 cri.go:96] found id: ""
	I1222 01:19:39.650791 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.650800 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:39.650812 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:39.650883 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:39.679472 2054169 cri.go:96] found id: ""
	I1222 01:19:39.679495 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.679503 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:39.679510 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:39.679574 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:39.704257 2054169 cri.go:96] found id: ""
	I1222 01:19:39.704283 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.704292 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:39.704299 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:39.704363 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:39.730435 2054169 cri.go:96] found id: ""
	I1222 01:19:39.730460 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.730479 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:39.730485 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:39.730543 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:39.757397 2054169 cri.go:96] found id: ""
	I1222 01:19:39.757422 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.757432 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:39.757438 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:39.757502 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:39.784335 2054169 cri.go:96] found id: ""
	I1222 01:19:39.784360 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.784369 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:39.784376 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:39.784438 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:39.817797 2054169 cri.go:96] found id: ""
	I1222 01:19:39.817820 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.817828 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:39.817835 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:39.817898 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:39.848088 2054169 cri.go:96] found id: ""
	I1222 01:19:39.848112 2054169 logs.go:282] 0 containers: []
	W1222 01:19:39.848121 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:39.848129 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:39.848141 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:39.870609 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:39.870637 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:39.943630 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:39.943648 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:39.943666 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:39.977032 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:39.977070 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:40.014118 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:40.014149 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:42.588439 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:42.614080 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:42.614156 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:42.666787 2054169 cri.go:96] found id: ""
	I1222 01:19:42.666808 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.666817 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:42.666823 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:42.666889 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:42.716837 2054169 cri.go:96] found id: ""
	I1222 01:19:42.716859 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.716867 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:42.716874 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:42.716939 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:42.751102 2054169 cri.go:96] found id: ""
	I1222 01:19:42.751125 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.751133 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:42.751140 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:42.751198 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:42.779396 2054169 cri.go:96] found id: ""
	I1222 01:19:42.779478 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.779501 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:42.779518 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:42.779630 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:42.815907 2054169 cri.go:96] found id: ""
	I1222 01:19:42.815929 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.815937 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:42.815943 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:42.816015 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:42.846336 2054169 cri.go:96] found id: ""
	I1222 01:19:42.846357 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.846366 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:42.846372 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:42.846438 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:42.876473 2054169 cri.go:96] found id: ""
	I1222 01:19:42.876495 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.876504 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:42.876510 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:42.876586 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:42.907722 2054169 cri.go:96] found id: ""
	I1222 01:19:42.907743 2054169 logs.go:282] 0 containers: []
	W1222 01:19:42.907751 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:42.907760 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:42.907771 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:42.924726 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:42.924798 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:43.008611 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:43.008634 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:43.008646 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:43.043578 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:43.043615 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:43.080183 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:43.080219 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:45.657783 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:45.668219 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:45.668299 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:45.693752 2054169 cri.go:96] found id: ""
	I1222 01:19:45.693777 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.693786 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:45.693793 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:45.693855 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:45.719884 2054169 cri.go:96] found id: ""
	I1222 01:19:45.719908 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.719917 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:45.719923 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:45.719984 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:45.747338 2054169 cri.go:96] found id: ""
	I1222 01:19:45.747366 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.747375 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:45.747381 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:45.747444 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:45.775819 2054169 cri.go:96] found id: ""
	I1222 01:19:45.775843 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.775852 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:45.775859 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:45.775918 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:45.805449 2054169 cri.go:96] found id: ""
	I1222 01:19:45.805470 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.805478 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:45.805484 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:45.805545 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:45.830390 2054169 cri.go:96] found id: ""
	I1222 01:19:45.830419 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.830428 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:45.830435 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:45.830499 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:45.856374 2054169 cri.go:96] found id: ""
	I1222 01:19:45.856396 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.856405 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:45.856417 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:45.856482 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:45.883824 2054169 cri.go:96] found id: ""
	I1222 01:19:45.883847 2054169 logs.go:282] 0 containers: []
	W1222 01:19:45.883855 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:45.883864 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:45.883875 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:45.950881 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:45.950915 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:45.967071 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:45.967138 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:46.030790 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:46.030809 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:46.030821 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:46.062248 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:46.062281 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:48.596855 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:48.608928 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:48.609005 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:48.639745 2054169 cri.go:96] found id: ""
	I1222 01:19:48.639767 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.639776 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:48.639782 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:48.639844 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:48.669665 2054169 cri.go:96] found id: ""
	I1222 01:19:48.669690 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.669699 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:48.669706 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:48.669763 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:48.695743 2054169 cri.go:96] found id: ""
	I1222 01:19:48.695769 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.695779 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:48.695785 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:48.695850 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:48.721234 2054169 cri.go:96] found id: ""
	I1222 01:19:48.721307 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.721360 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:48.721375 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:48.721446 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:48.747326 2054169 cri.go:96] found id: ""
	I1222 01:19:48.747352 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.747362 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:48.747368 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:48.747428 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:48.776063 2054169 cri.go:96] found id: ""
	I1222 01:19:48.776086 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.776095 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:48.776102 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:48.776168 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:48.801993 2054169 cri.go:96] found id: ""
	I1222 01:19:48.802018 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.802027 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:48.802034 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:48.802097 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:48.826887 2054169 cri.go:96] found id: ""
	I1222 01:19:48.826910 2054169 logs.go:282] 0 containers: []
	W1222 01:19:48.826919 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:48.826928 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:48.826940 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:48.898011 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:48.898048 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:48.914782 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:48.914810 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:48.979568 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:48.979631 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:48.979650 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:49.009978 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:49.010012 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:51.539204 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:51.549658 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:51.549731 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:51.574410 2054169 cri.go:96] found id: ""
	I1222 01:19:51.574435 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.574444 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:51.574454 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:51.574514 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:51.610730 2054169 cri.go:96] found id: ""
	I1222 01:19:51.610756 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.610765 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:51.610771 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:51.610838 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:51.643937 2054169 cri.go:96] found id: ""
	I1222 01:19:51.643971 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.643980 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:51.643986 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:51.644055 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:51.668613 2054169 cri.go:96] found id: ""
	I1222 01:19:51.668650 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.668660 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:51.668666 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:51.668747 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:51.696537 2054169 cri.go:96] found id: ""
	I1222 01:19:51.696566 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.696574 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:51.696580 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:51.696649 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:51.723532 2054169 cri.go:96] found id: ""
	I1222 01:19:51.723608 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.723629 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:51.723643 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:51.723716 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:51.749386 2054169 cri.go:96] found id: ""
	I1222 01:19:51.749423 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.749468 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:51.749475 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:51.749552 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:51.774605 2054169 cri.go:96] found id: ""
	I1222 01:19:51.774635 2054169 logs.go:282] 0 containers: []
	W1222 01:19:51.774644 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:51.774653 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:51.774694 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:51.806366 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:51.806404 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:51.839865 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:51.839891 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:51.907630 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:51.907670 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:51.924283 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:51.924311 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:51.990031 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
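Every describe-nodes attempt fails the same way: kubectl, pointed at /var/lib/minikube/kubeconfig, dials the API server on localhost:8443 and is refused, which indicates nothing is listening on that port rather than an auth or TLS problem. A minimal sketch of how to confirm that from inside the node (assumptions: 8443 is the configured API port, and /etc/kubernetes/manifests is the standard kubeadm static-pod directory):

    # is anything serving the API port?
    sudo ss -ltn 'sport = :8443'
    # are the static-pod manifests present for the kubelet to start?
    ls /etc/kubernetes/manifests/
    # did the kubelet log why it could not start them?
    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'apiserver|error|fail'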
	I1222 01:19:54.490682 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:54.500660 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:54.500753 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:54.528742 2054169 cri.go:96] found id: ""
	I1222 01:19:54.528765 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.528774 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:54.528780 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:54.528840 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:54.555205 2054169 cri.go:96] found id: ""
	I1222 01:19:54.555233 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.555241 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:54.555249 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:54.555309 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:54.583435 2054169 cri.go:96] found id: ""
	I1222 01:19:54.583462 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.583471 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:54.583477 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:54.583567 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:54.622273 2054169 cri.go:96] found id: ""
	I1222 01:19:54.622297 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.622305 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:54.622312 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:54.622373 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:54.662813 2054169 cri.go:96] found id: ""
	I1222 01:19:54.662838 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.662847 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:54.662853 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:54.662919 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:54.694024 2054169 cri.go:96] found id: ""
	I1222 01:19:54.694103 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.694124 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:54.694139 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:54.694220 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:54.723870 2054169 cri.go:96] found id: ""
	I1222 01:19:54.723897 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.723906 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:54.723913 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:54.723975 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:54.752410 2054169 cri.go:96] found id: ""
	I1222 01:19:54.752433 2054169 logs.go:282] 0 containers: []
	W1222 01:19:54.752441 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:54.752450 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:54.752462 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:54.812718 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:19:54.812740 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:54.812752 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:54.843311 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:54.843343 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:54.871272 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:54.871300 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:54.946328 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:54.946374 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:57.465466 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:19:57.475978 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:19:57.476055 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:19:57.503681 2054169 cri.go:96] found id: ""
	I1222 01:19:57.503709 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.503718 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:19:57.503725 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:19:57.503793 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:19:57.529771 2054169 cri.go:96] found id: ""
	I1222 01:19:57.529793 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.529803 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:19:57.529809 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:19:57.529870 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:19:57.555698 2054169 cri.go:96] found id: ""
	I1222 01:19:57.555721 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.555732 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:19:57.555739 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:19:57.555800 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:19:57.581526 2054169 cri.go:96] found id: ""
	I1222 01:19:57.581547 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.581556 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:19:57.581562 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:19:57.581626 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:19:57.620094 2054169 cri.go:96] found id: ""
	I1222 01:19:57.620117 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.620126 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:19:57.620132 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:19:57.620192 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:19:57.652128 2054169 cri.go:96] found id: ""
	I1222 01:19:57.652150 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.652159 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:19:57.652166 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:19:57.652229 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:19:57.680306 2054169 cri.go:96] found id: ""
	I1222 01:19:57.680328 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.680336 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:19:57.680343 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:19:57.680402 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:19:57.708077 2054169 cri.go:96] found id: ""
	I1222 01:19:57.708143 2054169 logs.go:282] 0 containers: []
	W1222 01:19:57.708165 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:19:57.708187 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:19:57.708213 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:19:57.739425 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:19:57.739460 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:19:57.769092 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:19:57.769166 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:19:57.837426 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:19:57.837459 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:19:57.854858 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:19:57.854887 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:19:57.917869 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:00.424901 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:00.447500 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:00.447582 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:00.496813 2054169 cri.go:96] found id: ""
	I1222 01:20:00.496842 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.496857 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:00.496865 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:00.496945 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:00.535208 2054169 cri.go:96] found id: ""
	I1222 01:20:00.535238 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.535248 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:00.535254 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:00.535327 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:00.570489 2054169 cri.go:96] found id: ""
	I1222 01:20:00.570518 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.570527 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:00.570534 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:00.570615 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:00.606843 2054169 cri.go:96] found id: ""
	I1222 01:20:00.606872 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.606882 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:00.606889 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:00.606958 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:00.640574 2054169 cri.go:96] found id: ""
	I1222 01:20:00.640600 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.640609 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:00.640616 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:00.640691 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:00.676351 2054169 cri.go:96] found id: ""
	I1222 01:20:00.676429 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.676452 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:00.676472 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:00.676601 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:00.704069 2054169 cri.go:96] found id: ""
	I1222 01:20:00.704146 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.704168 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:00.704189 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:00.704282 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:00.732880 2054169 cri.go:96] found id: ""
	I1222 01:20:00.732958 2054169 logs.go:282] 0 containers: []
	W1222 01:20:00.732982 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:00.733007 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:00.733048 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:00.804966 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:00.805002 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:00.821821 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:00.821852 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:00.889459 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:00.889481 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:00.889501 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:00.921849 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:00.921883 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:03.452051 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:03.468838 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:03.468924 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:03.501243 2054169 cri.go:96] found id: ""
	I1222 01:20:03.501343 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.501368 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:03.501396 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:03.501466 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:03.528796 2054169 cri.go:96] found id: ""
	I1222 01:20:03.528820 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.528829 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:03.528835 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:03.528901 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:03.560133 2054169 cri.go:96] found id: ""
	I1222 01:20:03.560207 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.560244 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:03.560270 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:03.560363 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:03.587644 2054169 cri.go:96] found id: ""
	I1222 01:20:03.587669 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.587678 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:03.587684 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:03.587747 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:03.615760 2054169 cri.go:96] found id: ""
	I1222 01:20:03.615837 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.615862 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:03.615882 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:03.615994 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:03.654745 2054169 cri.go:96] found id: ""
	I1222 01:20:03.654772 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.654780 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:03.654787 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:03.654849 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:03.682754 2054169 cri.go:96] found id: ""
	I1222 01:20:03.682778 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.682787 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:03.682793 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:03.682853 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:03.709509 2054169 cri.go:96] found id: ""
	I1222 01:20:03.709588 2054169 logs.go:282] 0 containers: []
	W1222 01:20:03.709621 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:03.709643 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:03.709682 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:03.741385 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:03.741422 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:03.770836 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:03.770864 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:03.839114 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:03.839147 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:03.856909 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:03.856937 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:03.924765 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:06.425469 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:06.435641 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:06.435708 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:06.468244 2054169 cri.go:96] found id: ""
	I1222 01:20:06.468266 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.468275 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:06.468281 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:06.468340 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:06.498880 2054169 cri.go:96] found id: ""
	I1222 01:20:06.498905 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.498914 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:06.498921 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:06.498985 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:06.525774 2054169 cri.go:96] found id: ""
	I1222 01:20:06.525797 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.525805 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:06.525811 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:06.525875 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:06.554748 2054169 cri.go:96] found id: ""
	I1222 01:20:06.554774 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.554782 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:06.554789 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:06.554851 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:06.586068 2054169 cri.go:96] found id: ""
	I1222 01:20:06.586094 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.586102 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:06.586109 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:06.586180 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:06.615251 2054169 cri.go:96] found id: ""
	I1222 01:20:06.615279 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.615288 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:06.615294 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:06.615359 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:06.647680 2054169 cri.go:96] found id: ""
	I1222 01:20:06.647707 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.647716 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:06.647752 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:06.647832 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:06.674959 2054169 cri.go:96] found id: ""
	I1222 01:20:06.674983 2054169 logs.go:282] 0 containers: []
	W1222 01:20:06.674992 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:06.675000 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:06.675012 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:06.743062 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:06.743099 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:06.760052 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:06.760079 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:06.829906 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:06.829927 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:06.829943 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:06.862163 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:06.862199 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
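The timestamps show the wait loop retrying on a roughly three-second cadence (01:19:39, :42, :45, ... through 01:20:06 above) without the API server ever appearing. A minimal sketch of an equivalent poll-until-deadline loop (an illustration only; the real loop lives in minikube's Go code, and the 360-second budget here is an arbitrary stand-in, not the test's actual timeout):

    deadline=$((SECONDS + 360))   # hypothetical budget, not minikube's real timeout
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'timed out waiting for kube-apiserver' >&2
        exit 1
      fi
      sleep 3
    done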
	I1222 01:20:09.394806 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:09.404938 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:09.405016 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:09.431042 2054169 cri.go:96] found id: ""
	I1222 01:20:09.431066 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.431074 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:09.431081 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:09.431140 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:09.455788 2054169 cri.go:96] found id: ""
	I1222 01:20:09.455813 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.455822 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:09.455829 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:09.455889 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:09.486564 2054169 cri.go:96] found id: ""
	I1222 01:20:09.486589 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.486597 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:09.486604 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:09.486665 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:09.513945 2054169 cri.go:96] found id: ""
	I1222 01:20:09.513971 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.513980 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:09.513987 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:09.514046 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:09.544745 2054169 cri.go:96] found id: ""
	I1222 01:20:09.544767 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.544776 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:09.544782 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:09.544849 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:09.570515 2054169 cri.go:96] found id: ""
	I1222 01:20:09.570540 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.570549 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:09.570556 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:09.570616 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:09.603874 2054169 cri.go:96] found id: ""
	I1222 01:20:09.603900 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.603909 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:09.603915 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:09.603975 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:09.635119 2054169 cri.go:96] found id: ""
	I1222 01:20:09.635141 2054169 logs.go:282] 0 containers: []
	W1222 01:20:09.635150 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:09.635158 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:09.635171 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:09.709563 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:09.709600 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:09.728062 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:09.728092 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:09.793897 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:09.793923 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:09.793936 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:09.825552 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:09.825582 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:12.355075 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:12.368152 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:12.368220 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:12.396632 2054169 cri.go:96] found id: ""
	I1222 01:20:12.396655 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.396663 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:12.396669 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:12.396734 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:12.424419 2054169 cri.go:96] found id: ""
	I1222 01:20:12.424441 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.424450 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:12.424456 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:12.424516 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:12.454407 2054169 cri.go:96] found id: ""
	I1222 01:20:12.454431 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.454440 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:12.454447 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:12.454510 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:12.480847 2054169 cri.go:96] found id: ""
	I1222 01:20:12.480871 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.480880 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:12.480886 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:12.480944 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:12.506569 2054169 cri.go:96] found id: ""
	I1222 01:20:12.506597 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.506606 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:12.506613 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:12.506674 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:12.532990 2054169 cri.go:96] found id: ""
	I1222 01:20:12.533014 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.533023 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:12.533029 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:12.533090 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:12.559135 2054169 cri.go:96] found id: ""
	I1222 01:20:12.559158 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.559168 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:12.559174 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:12.559233 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:12.583742 2054169 cri.go:96] found id: ""
	I1222 01:20:12.583767 2054169 logs.go:282] 0 containers: []
	W1222 01:20:12.583776 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:12.583785 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:12.583796 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:12.655718 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:12.655753 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:12.673353 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:12.673383 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:12.732811 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:12.732829 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:12.732842 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:12.764028 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:12.764061 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:15.296178 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:15.306381 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:15.306450 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:15.347575 2054169 cri.go:96] found id: ""
	I1222 01:20:15.347596 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.347604 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:15.347610 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:15.347668 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:15.383233 2054169 cri.go:96] found id: ""
	I1222 01:20:15.383255 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.383264 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:15.383271 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:15.383328 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:15.416611 2054169 cri.go:96] found id: ""
	I1222 01:20:15.416633 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.416642 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:15.416648 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:15.416709 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:15.443311 2054169 cri.go:96] found id: ""
	I1222 01:20:15.443327 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.443335 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:15.443344 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:15.443406 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:15.480332 2054169 cri.go:96] found id: ""
	I1222 01:20:15.480422 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.480445 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:15.480466 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:15.480569 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:15.516714 2054169 cri.go:96] found id: ""
	I1222 01:20:15.516788 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.516811 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:15.516832 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:15.516924 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:15.547656 2054169 cri.go:96] found id: ""
	I1222 01:20:15.547726 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.547749 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:15.547769 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:15.547861 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:15.579484 2054169 cri.go:96] found id: ""
	I1222 01:20:15.579557 2054169 logs.go:282] 0 containers: []
	W1222 01:20:15.579580 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:15.579600 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:15.579650 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:15.656926 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:15.657884 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:15.748335 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:15.748409 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:15.828936 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:15.828970 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:15.848253 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:15.848281 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:15.917191 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
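
Every "describe nodes" attempt in this stretch fails identically: the connection to localhost:8443 is refused because, as the probes above show, no kube-apiserver container exists yet. The symptom is reproducible by hand with the same binary and kubeconfig quoted in the log (a sketch; expect "connection refused" for as long as the control plane is down):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # or probe the apiserver port directly:
    curl -k https://localhost:8443/healthz
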
	I1222 01:20:18.418052 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:18.429671 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:18.429750 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:18.474710 2054169 cri.go:96] found id: ""
	I1222 01:20:18.474731 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.474739 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:18.474746 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:18.474812 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:18.511376 2054169 cri.go:96] found id: ""
	I1222 01:20:18.511398 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.511406 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:18.511413 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:18.511473 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:18.548178 2054169 cri.go:96] found id: ""
	I1222 01:20:18.548200 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.548208 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:18.548214 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:18.548273 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:18.584587 2054169 cri.go:96] found id: ""
	I1222 01:20:18.584608 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.584617 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:18.584623 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:18.584682 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:18.666838 2054169 cri.go:96] found id: ""
	I1222 01:20:18.666860 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.666868 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:18.666875 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:18.667003 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:18.702165 2054169 cri.go:96] found id: ""
	I1222 01:20:18.702239 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.702262 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:18.702282 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:18.702375 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:18.740394 2054169 cri.go:96] found id: ""
	I1222 01:20:18.740465 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.740488 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:18.740507 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:18.740610 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:18.773780 2054169 cri.go:96] found id: ""
	I1222 01:20:18.773856 2054169 logs.go:282] 0 containers: []
	W1222 01:20:18.773878 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:18.773903 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:18.773942 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:18.848374 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:18.848448 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:18.865490 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:18.865535 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:18.953305 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:18.953359 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:18.953375 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:18.987828 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:18.987917 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:21.527454 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:21.537732 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:21.537801 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:21.563444 2054169 cri.go:96] found id: ""
	I1222 01:20:21.563470 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.563479 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:21.563485 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:21.563547 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:21.595264 2054169 cri.go:96] found id: ""
	I1222 01:20:21.595291 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.595300 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:21.595306 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:21.595371 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:21.624192 2054169 cri.go:96] found id: ""
	I1222 01:20:21.624213 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.624221 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:21.624228 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:21.624288 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:21.659291 2054169 cri.go:96] found id: ""
	I1222 01:20:21.659314 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.659323 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:21.659329 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:21.659392 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:21.684670 2054169 cri.go:96] found id: ""
	I1222 01:20:21.684692 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.684700 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:21.684706 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:21.684768 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:21.712572 2054169 cri.go:96] found id: ""
	I1222 01:20:21.712595 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.712603 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:21.712610 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:21.712672 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:21.738407 2054169 cri.go:96] found id: ""
	I1222 01:20:21.738436 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.738444 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:21.738451 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:21.738511 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:21.764779 2054169 cri.go:96] found id: ""
	I1222 01:20:21.764802 2054169 logs.go:282] 0 containers: []
	W1222 01:20:21.764811 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:21.764820 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:21.764831 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:21.833563 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:21.833636 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:21.852869 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:21.852896 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:21.940473 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:21.940552 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:21.940579 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:21.976924 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:21.976959 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:24.521091 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:24.531240 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:20:24.531312 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:20:24.557763 2054169 cri.go:96] found id: ""
	I1222 01:20:24.557791 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.557800 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:20:24.557806 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:20:24.557880 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:20:24.583459 2054169 cri.go:96] found id: ""
	I1222 01:20:24.583482 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.583491 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:20:24.583497 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:20:24.583561 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:20:24.622888 2054169 cri.go:96] found id: ""
	I1222 01:20:24.622914 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.622922 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:20:24.622929 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:20:24.622990 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:20:24.652327 2054169 cri.go:96] found id: ""
	I1222 01:20:24.652352 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.652361 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:20:24.652367 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:20:24.652426 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:20:24.680373 2054169 cri.go:96] found id: ""
	I1222 01:20:24.680398 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.680407 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:20:24.680414 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:20:24.680480 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:20:24.705303 2054169 cri.go:96] found id: ""
	I1222 01:20:24.705346 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.705355 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:20:24.705361 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:20:24.705421 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:20:24.736190 2054169 cri.go:96] found id: ""
	I1222 01:20:24.736214 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.736223 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:20:24.736230 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:20:24.736288 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:20:24.761678 2054169 cri.go:96] found id: ""
	I1222 01:20:24.761703 2054169 logs.go:282] 0 containers: []
	W1222 01:20:24.761712 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:20:24.761720 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:20:24.761732 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:20:24.777146 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:20:24.777174 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:20:24.837874 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:20:24.837942 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:20:24.837972 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:20:24.868330 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:20:24.868362 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:20:24.896164 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:20:24.896195 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:20:27.465694 2054169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:27.477173 2054169 kubeadm.go:602] duration metric: took 4m2.006925958s to restartPrimaryControlPlane
	W1222 01:20:27.477243 2054169 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 01:20:27.477306 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:20:27.886723 2054169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:20:27.899749 2054169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:20:27.907724 2054169 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:20:27.907791 2054169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:20:27.915859 2054169 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:20:27.915878 2054169 kubeadm.go:158] found existing configuration files:
	
	I1222 01:20:27.915938 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:20:27.923952 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:20:27.924017 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:20:27.931603 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:20:27.939410 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:20:27.939477 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:20:27.947320 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:20:27.955840 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:20:27.955925 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:20:27.963517 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:20:27.971473 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:20:27.971542 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
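
The cleanup rule applied above keeps a file under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443 and removes it otherwise; here all four files are absent, so each grep exits with status 2 and the rm that follows is a no-op. Roughly, as a sketch of the equivalent logic (not minikube's code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"   # drop stale or missing kubeconfigs
        fi
    done
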
	I1222 01:20:27.979122 2054169 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:20:28.020152 2054169 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:20:28.020379 2054169 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:20:28.101372 2054169 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:20:28.101451 2054169 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:20:28.101491 2054169 kubeadm.go:319] OS: Linux
	I1222 01:20:28.101543 2054169 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:20:28.101596 2054169 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:20:28.101645 2054169 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:20:28.101706 2054169 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:20:28.101758 2054169 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:20:28.101809 2054169 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:20:28.101858 2054169 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:20:28.101913 2054169 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:20:28.101963 2054169 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:20:28.164140 2054169 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:20:28.164258 2054169 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:20:28.164354 2054169 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:20:28.173709 2054169 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:20:28.176698 2054169 out.go:252]   - Generating certificates and keys ...
	I1222 01:20:28.176793 2054169 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:20:28.176865 2054169 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:20:28.176948 2054169 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:20:28.177013 2054169 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:20:28.177086 2054169 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:20:28.177144 2054169 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:20:28.177211 2054169 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:20:28.177277 2054169 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:20:28.177399 2054169 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:20:28.177477 2054169 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:20:28.177518 2054169 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:20:28.177577 2054169 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:20:28.273898 2054169 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:20:28.811275 2054169 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:20:28.991852 2054169 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:20:29.145750 2054169 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:20:29.248045 2054169 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:20:29.252402 2054169 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:20:29.252520 2054169 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:20:29.255952 2054169 out.go:252]   - Booting up control plane ...
	I1222 01:20:29.256059 2054169 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:20:29.256152 2054169 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:20:29.257316 2054169 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:20:29.282344 2054169 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:20:29.282474 2054169 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:20:29.296332 2054169 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:20:29.296453 2054169 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:20:29.296498 2054169 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:20:29.474524 2054169 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:20:29.474658 2054169 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:24:29.475045 2054169 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000885588s
	I1222 01:24:29.475081 2054169 kubeadm.go:319] 
	I1222 01:24:29.475139 2054169 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:24:29.475175 2054169 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:24:29.475313 2054169 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:24:29.475341 2054169 kubeadm.go:319] 
	I1222 01:24:29.475457 2054169 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:24:29.475490 2054169 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:24:29.475534 2054169 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:24:29.475548 2054169 kubeadm.go:319] 
	I1222 01:24:29.479804 2054169 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:24:29.480229 2054169 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:24:29.480344 2054169 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:24:29.480589 2054169 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:24:29.480596 2054169 kubeadm.go:319] 
	I1222 01:24:29.480665 2054169 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:24:29.480774 2054169 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000885588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
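
The failing check is the one kubeadm names explicitly: an HTTP GET against the kubelet's local health endpoint, retried for up to 4m0s. It can be run by hand alongside the troubleshooting commands kubeadm itself suggests (all three commands are taken from the output above; a healthy kubelet answers "ok"):

    curl -sSL http://127.0.0.1:10248/healthz
    systemctl status kubelet
    journalctl -xeu kubelet
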
	
	I1222 01:24:29.480856 2054169 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:24:29.910017 2054169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:24:29.922467 2054169 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:24:29.922532 2054169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:24:29.930385 2054169 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:24:29.930406 2054169 kubeadm.go:158] found existing configuration files:
	
	I1222 01:24:29.930472 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:24:29.938077 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:24:29.938146 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:24:29.945541 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:24:29.953366 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:24:29.953431 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:24:29.961469 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:24:29.969544 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:24:29.969616 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:24:29.977246 2054169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:24:29.985396 2054169 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:24:29.985457 2054169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:24:29.992610 2054169 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:24:30.124146 2054169 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:24:30.124589 2054169 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:24:30.192444 2054169 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:28:32.069299 2054169 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:28:32.069350 2054169 kubeadm.go:319] 
	I1222 01:28:32.069422 2054169 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:28:32.073363 2054169 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:28:32.073428 2054169 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:28:32.073513 2054169 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:28:32.073569 2054169 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:28:32.073605 2054169 kubeadm.go:319] OS: Linux
	I1222 01:28:32.073651 2054169 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:28:32.073700 2054169 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:28:32.073749 2054169 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:28:32.073794 2054169 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:28:32.073839 2054169 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:28:32.073885 2054169 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:28:32.073927 2054169 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:28:32.073973 2054169 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:28:32.074016 2054169 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:28:32.074084 2054169 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:28:32.074173 2054169 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:28:32.074258 2054169 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:28:32.074316 2054169 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:28:32.077697 2054169 out.go:252]   - Generating certificates and keys ...
	I1222 01:28:32.077796 2054169 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:28:32.077888 2054169 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:28:32.077998 2054169 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:28:32.078083 2054169 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:28:32.078185 2054169 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:28:32.078256 2054169 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:28:32.078322 2054169 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:28:32.078394 2054169 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:28:32.078466 2054169 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:28:32.078547 2054169 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:28:32.078592 2054169 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:28:32.078680 2054169 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:28:32.078758 2054169 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:28:32.078822 2054169 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:28:32.078900 2054169 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:28:32.078972 2054169 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:28:32.079040 2054169 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:28:32.079135 2054169 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:28:32.079208 2054169 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:28:32.084257 2054169 out.go:252]   - Booting up control plane ...
	I1222 01:28:32.084370 2054169 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:28:32.084483 2054169 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:28:32.084568 2054169 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:28:32.084679 2054169 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:28:32.084773 2054169 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:28:32.084877 2054169 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:28:32.084961 2054169 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:28:32.085015 2054169 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:28:32.085204 2054169 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:28:32.085317 2054169 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:28:32.085450 2054169 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001156528s
	I1222 01:28:32.085461 2054169 kubeadm.go:319] 
	I1222 01:28:32.085515 2054169 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:28:32.085556 2054169 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:28:32.085661 2054169 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:28:32.085671 2054169 kubeadm.go:319] 
	I1222 01:28:32.085808 2054169 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:28:32.085852 2054169 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:28:32.085883 2054169 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:28:32.085908 2054169 kubeadm.go:319] 
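
The cgroups v1 deprecation warning recurs on every init attempt on this 5.15.0-1084-aws kernel. Whether the node really mounts cgroup v1 can be confirmed directly; this sketch uses the filesystem-type check documented in the Kubernetes cgroup docs:

    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1
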
	I1222 01:28:32.085953 2054169 kubeadm.go:403] duration metric: took 12m6.653551537s to StartCluster
	I1222 01:28:32.086001 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:28:32.086066 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:28:32.116188 2054169 cri.go:96] found id: ""
	I1222 01:28:32.116220 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.116230 2054169 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:28:32.116237 2054169 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:28:32.116312 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:28:32.152247 2054169 cri.go:96] found id: ""
	I1222 01:28:32.152276 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.152284 2054169 logs.go:284] No container was found matching "etcd"
	I1222 01:28:32.152291 2054169 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:28:32.152352 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:28:32.178905 2054169 cri.go:96] found id: ""
	I1222 01:28:32.178930 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.178938 2054169 logs.go:284] No container was found matching "coredns"
	I1222 01:28:32.178945 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:28:32.179005 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:28:32.204274 2054169 cri.go:96] found id: ""
	I1222 01:28:32.204297 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.204306 2054169 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:28:32.204312 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:28:32.204389 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:28:32.229346 2054169 cri.go:96] found id: ""
	I1222 01:28:32.229373 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.229381 2054169 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:28:32.229387 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:28:32.229451 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:28:32.255530 2054169 cri.go:96] found id: ""
	I1222 01:28:32.255555 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.255564 2054169 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:28:32.255571 2054169 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:28:32.255630 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:28:32.280855 2054169 cri.go:96] found id: ""
	I1222 01:28:32.280878 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.280886 2054169 logs.go:284] No container was found matching "kindnet"
	I1222 01:28:32.280893 2054169 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:28:32.280954 2054169 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:28:32.305898 2054169 cri.go:96] found id: ""
	I1222 01:28:32.305923 2054169 logs.go:282] 0 containers: []
	W1222 01:28:32.305931 2054169 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:28:32.305941 2054169 logs.go:123] Gathering logs for container status ...
	I1222 01:28:32.305953 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:28:32.334602 2054169 logs.go:123] Gathering logs for kubelet ...
	I1222 01:28:32.334628 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:28:32.403304 2054169 logs.go:123] Gathering logs for dmesg ...
	I1222 01:28:32.403337 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:28:32.420558 2054169 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:28:32.420585 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:28:32.488288 2054169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:28:32.488320 2054169 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:28:32.488333 2054169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1222 01:28:32.521401 2054169 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001156528s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:28:32.521464 2054169 out.go:285] * 
	W1222 01:28:32.521521 2054169 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001156528s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:28:32.521540 2054169 out.go:285] * 
	W1222 01:28:32.523711 2054169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:28:32.529769 2054169 out.go:203] 
	W1222 01:28:32.533545 2054169 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001156528s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:28:32.533592 2054169 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:28:32.533615 2054169 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:28:32.537275 2054169 out.go:203] 

** /stderr **
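The dump above boils down to kubeadm polling http://127.0.0.1:10248/healthz for 4m0s without an answer. A minimal sketch of the manual checks that kubeadm's own error text suggests, run from the host (assuming `minikube ssh -p kubernetes-upgrade-951043` can still reach the node container):

	# probe the endpoint kubeadm polls; a healthy kubelet answers "ok"
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-951043 -- curl -sSL http://127.0.0.1:10248/healthz
	# the two commands suggested by kubeadm's error text
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-951043 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-951043 -- sudo journalctl -xeu kubelet
	# the FailCgroupV1 warning above fires on cgroup v1 nodes: "tmpfs" here means v1, "cgroup2fs" means v2
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-951043 -- stat -fc %T /sys/fs/cgroup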
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
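A hedged sketch of the retry that the suggestion line in the log proposes, reusing the exact args of the failed start plus the extra flag (whether it helps on this kernel/cgroup setup is untested here):

	# rerun the upgrade start with the suggested kubelet cgroup-driver override
	out/minikube-linux-arm64 start -p kubernetes-upgrade-951043 --memory=3072 \
	  --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd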
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-951043 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-951043 version --output=json: exit status 1 (90.220654ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
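The stdout/stderr pair above shows a v1.33.2 client with no server to talk to. A quick sketch for narrowing that down, with the endpoint address taken from the stderr above:

	# confirm which server URL the profile's kubeconfig context points at
	kubectl config view --minify --context kubernetes-upgrade-951043 -o jsonpath='{.clusters[0].cluster.server}'
	# probe the apiserver port directly; with no kube-apiserver container running, this is refused
	curl -k --max-time 5 https://192.168.76.2:8443/healthz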
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-22 01:28:33.507895595 +0000 UTC m=+4763.095519111
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-951043
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-951043:

-- stdout --
	[
	    {
	        "Id": "19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850",
	        "Created": "2025-12-22T01:15:34.403452501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2054340,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:16:13.568283921Z",
	            "FinishedAt": "2025-12-22T01:16:12.262167323Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850/hostname",
	        "HostsPath": "/var/lib/docker/containers/19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850/hosts",
	        "LogPath": "/var/lib/docker/containers/19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850/19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850-json.log",
	        "Name": "/kubernetes-upgrade-951043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-951043:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-951043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "19666f3233d673d465fdc0963d41263bea1d63988a5b197ad5e30193fad8d850",
	                "LowerDir": "/var/lib/docker/overlay2/915ddcb36861b060a5144c55131ab54ed7486ab438ff751ab0733d43bef97770-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/915ddcb36861b060a5144c55131ab54ed7486ab438ff751ab0733d43bef97770/merged",
	                "UpperDir": "/var/lib/docker/overlay2/915ddcb36861b060a5144c55131ab54ed7486ab438ff751ab0733d43bef97770/diff",
	                "WorkDir": "/var/lib/docker/overlay2/915ddcb36861b060a5144c55131ab54ed7486ab438ff751ab0733d43bef97770/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-951043",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-951043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-951043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-951043",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-951043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fa78d6a1634d87618ce70fd40233d2c03b5a81e52365a79ed00ebb72278fbf8",
	            "SandboxKey": "/var/run/docker/netns/7fa78d6a1634",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38681"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38682"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38685"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38683"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38684"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-951043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:64:b4:57:08:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dbb499f6e37f523c3f481a64e63e1e7d3b08c159dbe791b6c9f9a98bb126203d",
	                    "EndpointID": "ecb6af0b1bd8cff11de841da5d5fc3ef03bf61525807db23c8cc87edb373c457",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-951043",
	                        "19666f3233d6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
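For spot checks, the same inspect data can be queried with a Go template instead of dumping the full JSON; a sketch using field names visible in the output above:

	# container state plus restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-951043
	# the profile-network IP that the refused apiserver endpoint above resolves to
	docker inspect -f '{{(index .NetworkSettings.Networks "kubernetes-upgrade-951043").IPAddress}}' kubernetes-upgrade-951043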
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-951043 -n kubernetes-upgrade-951043
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-951043 -n kubernetes-upgrade-951043: exit status 2 (471.904896ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
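{{.Host}} reports Running even though the control plane never came up, so the single-field check above under-reports. A sketch of a fuller status query (the Kubelet and APIServer field names are assumed from minikube's status template support, not shown in this log):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-951043 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'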
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-951043 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p missing-upgrade-158754 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ missing-upgrade-158754    │ jenkins │ v1.37.0 │ 22 Dec 25 01:14 UTC │ 22 Dec 25 01:15 UTC │
	│ start   │ -p pause-583340 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                              │ pause-583340              │ jenkins │ v1.37.0 │ 22 Dec 25 01:14 UTC │ 22 Dec 25 01:15 UTC │
	│ pause   │ -p pause-583340 --alsologtostderr -v=5                                                                                                        │ pause-583340              │ jenkins │ v1.37.0 │ 22 Dec 25 01:15 UTC │                     │
	│ delete  │ -p pause-583340                                                                                                                               │ pause-583340              │ jenkins │ v1.37.0 │ 22 Dec 25 01:15 UTC │ 22 Dec 25 01:15 UTC │
	│ start   │ -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ kubernetes-upgrade-951043 │ jenkins │ v1.37.0 │ 22 Dec 25 01:15 UTC │ 22 Dec 25 01:16 UTC │
	│ delete  │ -p missing-upgrade-158754                                                                                                                     │ missing-upgrade-158754    │ jenkins │ v1.37.0 │ 22 Dec 25 01:15 UTC │ 22 Dec 25 01:15 UTC │
	│ start   │ -p stopped-upgrade-608183 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ stopped-upgrade-608183    │ jenkins │ v1.35.0 │ 22 Dec 25 01:15 UTC │ 22 Dec 25 01:16 UTC │
	│ stop    │ -p kubernetes-upgrade-951043                                                                                                                  │ kubernetes-upgrade-951043 │ jenkins │ v1.37.0 │ 22 Dec 25 01:16 UTC │ 22 Dec 25 01:16 UTC │
	│ start   │ -p kubernetes-upgrade-951043 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-951043 │ jenkins │ v1.37.0 │ 22 Dec 25 01:16 UTC │                     │
	│ stop    │ stopped-upgrade-608183 stop                                                                                                                   │ stopped-upgrade-608183    │ jenkins │ v1.35.0 │ 22 Dec 25 01:16 UTC │ 22 Dec 25 01:16 UTC │
	│ start   │ -p stopped-upgrade-608183 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ stopped-upgrade-608183    │ jenkins │ v1.37.0 │ 22 Dec 25 01:16 UTC │ 22 Dec 25 01:20 UTC │
	│ delete  │ -p stopped-upgrade-608183                                                                                                                     │ stopped-upgrade-608183    │ jenkins │ v1.37.0 │ 22 Dec 25 01:20 UTC │ 22 Dec 25 01:20 UTC │
	│ start   │ -p running-upgrade-834402 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ running-upgrade-834402    │ jenkins │ v1.35.0 │ 22 Dec 25 01:20 UTC │ 22 Dec 25 01:21 UTC │
	│ start   │ -p running-upgrade-834402 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ running-upgrade-834402    │ jenkins │ v1.37.0 │ 22 Dec 25 01:21 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p running-upgrade-834402                                                                                                                     │ running-upgrade-834402    │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p test-preload-488704 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                  │ test-preload-488704       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ test-preload-488704 image pull public.ecr.aws/docker/library/busybox:latest                                                                   │ test-preload-488704       │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ stop    │ -p test-preload-488704                                                                                                                        │ test-preload-488704       │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p test-preload-488704 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                            │ test-preload-488704       │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:27 UTC │
	│ image   │ test-preload-488704 image list                                                                                                                │ test-preload-488704       │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ delete  │ -p test-preload-488704                                                                                                                        │ test-preload-488704       │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p force-systemd-env-464544 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ force-systemd-env-464544  │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p force-systemd-env-464544                                                                                                                   │ force-systemd-env-464544  │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                 │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ start   │ -p NoKubernetes-346588 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:28:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:28:33.944515 2091401 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:33.944724 2091401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:33.944762 2091401 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:33.944782 2091401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:33.945058 2091401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:28:33.945489 2091401 out.go:368] Setting JSON to false
	I1222 01:28:33.953678 2091401 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115863,"bootTime":1766251051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:28:33.953804 2091401 start.go:143] virtualization:  
	I1222 01:28:33.959005 2091401 out.go:179] * [NoKubernetes-346588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:33.965557 2091401 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:33.965651 2091401 notify.go:221] Checking for updates...
	I1222 01:28:33.971888 2091401 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:33.974748 2091401 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:28:33.977507 2091401 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:28:33.980788 2091401 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:33.983662 2091401 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.950548566Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.950766061Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.950885319Z" level=info msg="Create NRI interface"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951056956Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951302889Z" level=info msg="runtime interface created"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951404605Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951474314Z" level=info msg="runtime interface starting up..."
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951526439Z" level=info msg="starting plugins..."
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951598125Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:16:19 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:16:19.951745969Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:16:19 kubernetes-upgrade-951043 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.167980439Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=48369fab-d56e-47e2-a697-cd383137389c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.168710944Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=e3ed120e-850c-4b77-8d45-5428e0f69a53 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.169162484Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=13769a21-d641-4906-b577-33d991a14646 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.169746014Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=7738b88f-e6f2-44c8-86b6-50938dcc854b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.170118015Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=232e567e-8331-4a81-8681-b9f5555f7f76 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.170483937Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b6ef544a-a35f-4fac-bae7-64ff96fcada6 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:20:28 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:20:28.170921396Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3dc1e59-f5c6-4103-8fd0-59363b395fb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.201959367Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=f5cdd4bf-e64f-4fdb-8971-9f98e66b0c3b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.20505191Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=d25d2dfe-a0de-44f7-b009-f275d01a0ea4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.205546434Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=2ddf5ead-419f-45af-9cc8-0fc5decdc71a name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.206149353Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ba0bb6c6-6c87-48c9-ab6a-0c5c027173d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.206629765Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1dc9ed01-c757-4cfc-ae7f-c13c58d08de1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.207098461Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=6d0ba001-96fb-4494-945c-69ee03b3a210 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:24:30 kubernetes-upgrade-951043 crio[613]: time="2025-12-22T01:24:30.207618191Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e34e4659-37d3-47fc-bbcd-6380bef9e248 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.821564] overlayfs: idmapped layers are currently not supported
	[Dec22 00:55] overlayfs: idmapped layers are currently not supported
	[Dec22 00:56] overlayfs: idmapped layers are currently not supported
	[Dec22 00:57] overlayfs: idmapped layers are currently not supported
	[  +3.314214] overlayfs: idmapped layers are currently not supported
	[Dec22 00:58] overlayfs: idmapped layers are currently not supported
	[Dec22 00:59] overlayfs: idmapped layers are currently not supported
	[Dec22 01:00] overlayfs: idmapped layers are currently not supported
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:28:34 up 1 day,  8:11,  0 user,  load average: 2.60, 2.15, 1.95
	Linux kubernetes-upgrade-951043 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:28:32 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:28:32 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 22 01:28:32 kubernetes-upgrade-951043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:28:32 kubernetes-upgrade-951043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:28:32 kubernetes-upgrade-951043 kubelet[12198]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:28:32 kubernetes-upgrade-951043 kubelet[12198]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:28:32 kubernetes-upgrade-951043 kubelet[12198]: E1222 01:28:32.938099   12198 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:28:32 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:28:32 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:28:33 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 22 01:28:33 kubernetes-upgrade-951043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:28:33 kubernetes-upgrade-951043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:28:33 kubernetes-upgrade-951043 kubelet[12204]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:28:33 kubernetes-upgrade-951043 kubelet[12204]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:28:33 kubernetes-upgrade-951043 kubelet[12204]: E1222 01:28:33.702357   12204 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:28:33 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:28:33 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:28:34 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 22 01:28:34 kubernetes-upgrade-951043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:28:34 kubernetes-upgrade-951043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:28:34 kubernetes-upgrade-951043 kubelet[12232]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:28:34 kubernetes-upgrade-951043 kubelet[12232]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:28:34 kubernetes-upgrade-951043 kubelet[12232]: E1222 01:28:34.473076   12232 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:28:34 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:28:34 kubernetes-upgrade-951043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
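The kubelet section above is the root cause of this failure: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), systemd restarts it in a tight loop (restart counter at 960-962 by the end of the run), so the API server never comes up and the "describe nodes" probe above is refused on localhost:8443. A minimal way to confirm the host's cgroup mode, assuming shell access to the node container (the ssh invocation below is illustrative, not part of the harness run):

	# run on the host, or inside the node via:
	#   out/minikube-linux-arm64 ssh -p kubernetes-upgrade-951043
	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" => cgroup v2; "tmpfs" => cgroup v1

Under the docker driver the node container inherits the host's cgroup hierarchy, and this Jenkins host runs Ubuntu 20.04 (see the docker info captures below), which defaults to cgroup v1, so the validation error is expected on this machine.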
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-951043 -n kubernetes-upgrade-951043
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-951043 -n kubernetes-upgrade-951043: exit status 2 (442.197446ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-951043" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-951043" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-951043
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-951043: (2.673439925s)
--- FAIL: TestKubernetesUpgrade (790.68s)

TestPause/serial/Pause (9.03s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-583340 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-583340 --alsologtostderr -v=5: exit status 80 (1.98293675s)

-- stdout --
	* Pausing node pause-583340 ... 
	
	

-- /stdout --
** stderr ** 
	I1222 01:15:15.672946 2047476 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:15:15.675308 2047476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:15:15.675583 2047476 out.go:374] Setting ErrFile to fd 2...
	I1222 01:15:15.675621 2047476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:15:15.675940 2047476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:15:15.676255 2047476 out.go:368] Setting JSON to false
	I1222 01:15:15.676303 2047476 mustload.go:66] Loading cluster: pause-583340
	I1222 01:15:15.676788 2047476 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:15:15.677278 2047476 cli_runner.go:164] Run: docker container inspect pause-583340 --format={{.State.Status}}
	I1222 01:15:15.702666 2047476 host.go:66] Checking if "pause-583340" exists ...
	I1222 01:15:15.702972 2047476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:15:15.807975 2047476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-22 01:15:15.798162167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:15:15.808653 2047476 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-583340 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1222 01:15:15.812337 2047476 out.go:179] * Pausing node pause-583340 ... 
	I1222 01:15:15.816164 2047476 host.go:66] Checking if "pause-583340" exists ...
	I1222 01:15:15.816526 2047476 ssh_runner.go:195] Run: systemctl --version
	I1222 01:15:15.816581 2047476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:15:15.833984 2047476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:15:15.938836 2047476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:15:15.952056 2047476 pause.go:52] kubelet running: true
	I1222 01:15:15.952126 2047476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:15:16.177182 2047476 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:15:16.177261 2047476 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:15:16.268288 2047476 cri.go:96] found id: "0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814"
	I1222 01:15:16.268310 2047476 cri.go:96] found id: "4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc"
	I1222 01:15:16.268315 2047476 cri.go:96] found id: "2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e"
	I1222 01:15:16.268318 2047476 cri.go:96] found id: "a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be"
	I1222 01:15:16.268322 2047476 cri.go:96] found id: "360534ca8ae0ddd9193a87336eb1e700824be47538456a2172361591d20fdd15"
	I1222 01:15:16.268325 2047476 cri.go:96] found id: "53f57d7cd64676b980d9c9d9c9843515e2f0726aa6640a5bdb474959c5074223"
	I1222 01:15:16.268328 2047476 cri.go:96] found id: "d986846af4e9c4f8639a53c9046584c7dabd234f69aad5a9cf63520365ad4dca"
	I1222 01:15:16.268331 2047476 cri.go:96] found id: "5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb"
	I1222 01:15:16.268334 2047476 cri.go:96] found id: "d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0"
	I1222 01:15:16.268342 2047476 cri.go:96] found id: "e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242"
	I1222 01:15:16.268345 2047476 cri.go:96] found id: "e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	I1222 01:15:16.268348 2047476 cri.go:96] found id: "75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	I1222 01:15:16.268351 2047476 cri.go:96] found id: "1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2"
	I1222 01:15:16.268354 2047476 cri.go:96] found id: "bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a"
	I1222 01:15:16.268357 2047476 cri.go:96] found id: ""
	I1222 01:15:16.268405 2047476 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:15:16.281179 2047476 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:15:16Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:15:16.417529 2047476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:15:16.434545 2047476 pause.go:52] kubelet running: false
	I1222 01:15:16.434658 2047476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:15:16.691081 2047476 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:15:16.691207 2047476 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:15:16.817951 2047476 cri.go:96] found id: "0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814"
	I1222 01:15:16.818025 2047476 cri.go:96] found id: "4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc"
	I1222 01:15:16.818055 2047476 cri.go:96] found id: "2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e"
	I1222 01:15:16.818072 2047476 cri.go:96] found id: "a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be"
	I1222 01:15:16.818094 2047476 cri.go:96] found id: "360534ca8ae0ddd9193a87336eb1e700824be47538456a2172361591d20fdd15"
	I1222 01:15:16.818131 2047476 cri.go:96] found id: "53f57d7cd64676b980d9c9d9c9843515e2f0726aa6640a5bdb474959c5074223"
	I1222 01:15:16.818148 2047476 cri.go:96] found id: "d986846af4e9c4f8639a53c9046584c7dabd234f69aad5a9cf63520365ad4dca"
	I1222 01:15:16.818164 2047476 cri.go:96] found id: "5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb"
	I1222 01:15:16.818182 2047476 cri.go:96] found id: "d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0"
	I1222 01:15:16.818211 2047476 cri.go:96] found id: "e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242"
	I1222 01:15:16.818227 2047476 cri.go:96] found id: "e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	I1222 01:15:16.818243 2047476 cri.go:96] found id: "75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	I1222 01:15:16.818260 2047476 cri.go:96] found id: "1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2"
	I1222 01:15:16.818286 2047476 cri.go:96] found id: "bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a"
	I1222 01:15:16.818303 2047476 cri.go:96] found id: ""
	I1222 01:15:16.818383 2047476 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:15:17.079063 2047476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:15:17.102083 2047476 pause.go:52] kubelet running: false
	I1222 01:15:17.102199 2047476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:15:17.342205 2047476 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:15:17.342355 2047476 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:15:17.516260 2047476 cri.go:96] found id: "0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814"
	I1222 01:15:17.516336 2047476 cri.go:96] found id: "4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc"
	I1222 01:15:17.516354 2047476 cri.go:96] found id: "2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e"
	I1222 01:15:17.516370 2047476 cri.go:96] found id: "a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be"
	I1222 01:15:17.516387 2047476 cri.go:96] found id: "360534ca8ae0ddd9193a87336eb1e700824be47538456a2172361591d20fdd15"
	I1222 01:15:17.516417 2047476 cri.go:96] found id: "53f57d7cd64676b980d9c9d9c9843515e2f0726aa6640a5bdb474959c5074223"
	I1222 01:15:17.516434 2047476 cri.go:96] found id: "d986846af4e9c4f8639a53c9046584c7dabd234f69aad5a9cf63520365ad4dca"
	I1222 01:15:17.516449 2047476 cri.go:96] found id: "5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb"
	I1222 01:15:17.516465 2047476 cri.go:96] found id: "d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0"
	I1222 01:15:17.516492 2047476 cri.go:96] found id: "e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242"
	I1222 01:15:17.516514 2047476 cri.go:96] found id: "e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	I1222 01:15:17.516530 2047476 cri.go:96] found id: "75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	I1222 01:15:17.516545 2047476 cri.go:96] found id: "1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2"
	I1222 01:15:17.516568 2047476 cri.go:96] found id: "bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a"
	I1222 01:15:17.516587 2047476 cri.go:96] found id: ""
	I1222 01:15:17.516667 2047476 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:15:17.537088 2047476 out.go:203] 
	W1222 01:15:17.540150 2047476 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:15:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:15:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 01:15:17.540335 2047476 out.go:285] * 
	* 
	W1222 01:15:17.562960 2047476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:15:17.566178 2047476 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-583340 --alsologtostderr -v=5" : exit status 80
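The exit status 80 above traces to a single failing call: minikube's pause path lists the containers it needs to freeze with `sudo runc list -f json`, and on this node /run/runc (runc's default state directory for root) does not exist, so every attempt fails and the retries give up. A hedged diagnostic sketch, assuming shell access to the node; these commands are illustrative, not from the harness run:

	# CRI-O still answers over the CRI socket even when runc's state dir is absent:
	sudo crictl ps -a
	# confirm the state directory runc is complaining about:
	sudo ls /run/runc || echo "runc state directory missing"

Note the asymmetry visible in the log itself: each crictl pass returns 14 container IDs, while the direct `runc list` call fails immediately, which is why the pause attempts never converge.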
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-583340
helpers_test.go:244: (dbg) docker inspect pause-583340:

-- stdout --
	[
	    {
	        "Id": "4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c",
	        "Created": "2025-12-22T01:13:46.479251362Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2039493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:13:49.217860819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/hostname",
	        "HostsPath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/hosts",
	        "LogPath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c-json.log",
	        "Name": "/pause-583340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-583340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-583340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c",
	                "LowerDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-583340",
	                "Source": "/var/lib/docker/volumes/pause-583340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-583340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-583340",
	                "name.minikube.sigs.k8s.io": "pause-583340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac3ebb6fcf5970862acf821fc7fcc1b0bee98f13b17797f58852a0a6460e910a",
	            "SandboxKey": "/var/run/docker/netns/ac3ebb6fcf59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38656"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38657"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38660"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38658"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38659"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-583340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:3b:da:b5:af:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1f7dfb485d3424cdc12779a2f73c16ee78d9ce44cbbe257bb9cc718d35f48194",
	                    "EndpointID": "9823ea3de2b1346481ab23ff825c8a98c5838e6937aab36574bec7d7eebb2631",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-583340",
	                        "4374be10b56d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-583340 -n pause-583340
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-583340 -n pause-583340: exit status 2 (520.955453ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-583340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-583340 logs -n 25: (2.049941261s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-171197                                                                                         │ multinode-171197            │ jenkins │ v1.37.0 │ 22 Dec 25 01:10 UTC │                     │
	│ start   │ -p multinode-171197-m02 --driver=docker  --container-runtime=crio                                                │ multinode-171197-m02        │ jenkins │ v1.37.0 │ 22 Dec 25 01:10 UTC │                     │
	│ start   │ -p multinode-171197-m03 --driver=docker  --container-runtime=crio                                                │ multinode-171197-m03        │ jenkins │ v1.37.0 │ 22 Dec 25 01:10 UTC │ 22 Dec 25 01:11 UTC │
	│ node    │ add -p multinode-171197                                                                                          │ multinode-171197            │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │                     │
	│ delete  │ -p multinode-171197-m03                                                                                          │ multinode-171197-m03        │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │ 22 Dec 25 01:11 UTC │
	│ delete  │ -p multinode-171197                                                                                              │ multinode-171197            │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │ 22 Dec 25 01:11 UTC │
	│ start   │ -p scheduled-stop-389645 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │ 22 Dec 25 01:12 UTC │
	│ stop    │ -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --cancel-scheduled                                                                      │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │ 22 Dec 25 01:12 UTC │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │ 22 Dec 25 01:12 UTC │
	│ delete  │ -p scheduled-stop-389645                                                                                         │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:13 UTC │
	│ start   │ -p insufficient-storage-993583 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-993583 │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │                     │
	│ delete  │ -p insufficient-storage-993583                                                                                   │ insufficient-storage-993583 │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:13 UTC │
	│ start   │ -p pause-583340 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-583340                │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:14 UTC │
	│ start   │ -p missing-upgrade-158754 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-158754      │ jenkins │ v1.35.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:14 UTC │
	│ start   │ -p missing-upgrade-158754 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-158754      │ jenkins │ v1.37.0 │ 22 Dec 25 01:14 UTC │                     │
	│ start   │ -p pause-583340 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-583340                │ jenkins │ v1.37.0 │ 22 Dec 25 01:14 UTC │ 22 Dec 25 01:15 UTC │
	│ pause   │ -p pause-583340 --alsologtostderr -v=5                                                                           │ pause-583340                │ jenkins │ v1.37.0 │ 22 Dec 25 01:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:14:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:14:44.331609 2044197 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:14:44.331737 2044197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:14:44.331754 2044197 out.go:374] Setting ErrFile to fd 2...
	I1222 01:14:44.331766 2044197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:14:44.332137 2044197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:14:44.332579 2044197 out.go:368] Setting JSON to false
	I1222 01:14:44.333593 2044197 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115034,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:14:44.333690 2044197 start.go:143] virtualization:  
	I1222 01:14:44.336645 2044197 out.go:179] * [pause-583340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:14:44.338715 2044197 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:14:44.339561 2044197 notify.go:221] Checking for updates...
	I1222 01:14:44.344655 2044197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:14:44.347620 2044197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:14:44.350499 2044197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:14:44.353460 2044197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:14:44.356289 2044197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:14:44.359706 2044197 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:14:44.360885 2044197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:14:44.384979 2044197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:14:44.385112 2044197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:14:44.463922 2044197 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:14:44.454216519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:14:44.464040 2044197 docker.go:319] overlay module found
	I1222 01:14:44.467287 2044197 out.go:179] * Using the docker driver based on existing profile
	I1222 01:14:44.470122 2044197 start.go:309] selected driver: docker
	I1222 01:14:44.470143 2044197 start.go:928] validating driver "docker" against &{Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:14:44.470297 2044197 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:14:44.470397 2044197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:14:44.531371 2044197 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:14:44.52243327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:14:44.531775 2044197 cni.go:84] Creating CNI manager for ""
	I1222 01:14:44.531838 2044197 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
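With the docker driver paired with the crio runtime, minikube auto-selects the kindnet CNI, as logged above. A minimal sketch of making that choice explicit at start time (profile name taken from this run; --cni also accepts auto, bridge, calico, cilium, flannel, or a path to a CNI manifest):

	# Sketch: pin the CNI instead of relying on driver/runtime auto-detection.
	minikube start -p pause-583340 --driver=docker --container-runtime=crio --cni=kindnet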
	I1222 01:14:44.531892 2044197 start.go:353] cluster config:
	{Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:14:44.535156 2044197 out.go:179] * Starting "pause-583340" primary control-plane node in "pause-583340" cluster
	I1222 01:14:44.538441 2044197 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:14:44.541312 2044197 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:14:44.544176 2044197 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:14:44.544226 2044197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:14:44.544256 2044197 cache.go:65] Caching tarball of preloaded images
	I1222 01:14:44.544262 2044197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:14:44.544345 2044197 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:14:44.544355 2044197 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
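The preload tarball name encodes the preload schema (v18), the Kubernetes version, the container runtime, and the CPU architecture, which is why a cache hit lets minikube skip both the download and per-image pulls. A quick check of the cache (path copied from the log above):

	# Sketch: confirm the cached preload exists; its name encodes schema/k8s/runtime/arch.
	ls -lh /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4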
	I1222 01:14:44.544478 2044197 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/config.json ...
	I1222 01:14:44.564784 2044197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:14:44.564808 2044197 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:14:44.564823 2044197 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:14:44.564852 2044197 start.go:360] acquireMachinesLock for pause-583340: {Name:mk537b6ea77b948b0f7bd76cad679f7f864f9ec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:14:44.564911 2044197 start.go:364] duration metric: took 35.912µs to acquireMachinesLock for "pause-583340"
	I1222 01:14:44.564935 2044197 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:14:44.564945 2044197 fix.go:54] fixHost starting: 
	I1222 01:14:44.565201 2044197 cli_runner.go:164] Run: docker container inspect pause-583340 --format={{.State.Status}}
	I1222 01:14:44.581907 2044197 fix.go:112] recreateIfNeeded on pause-583340: state=Running err=<nil>
	W1222 01:14:44.581950 2044197 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:14:44.202800 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:44.224580 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:44.224643 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:44.224652 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:45.739995 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:45.765162 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:45.765230 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:45.765242 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:44.585101 2044197 out.go:252] * Updating the running docker "pause-583340" container ...
	I1222 01:14:44.585137 2044197 machine.go:94] provisionDockerMachine start ...
	I1222 01:14:44.585220 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:44.602379 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:44.602710 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:44.602726 2044197 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:14:44.745976 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-583340
	
	I1222 01:14:44.746003 2044197 ubuntu.go:182] provisioning hostname "pause-583340"
	I1222 01:14:44.746077 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:44.763793 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:44.764098 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:44.764109 2044197 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-583340 && echo "pause-583340" | sudo tee /etc/hostname
	I1222 01:14:44.916630 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-583340
	
	I1222 01:14:44.916714 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:44.934540 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:44.934853 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:44.934877 2044197 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-583340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-583340/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-583340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:14:45.103878 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: 
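The script above rewrites the 127.0.1.1 entry (or appends one) so the node resolves its own hostname without DNS. A quick verification from the host, assuming the node container name from this run:

	# Sketch: check the hostname and hosts entry inside the node container.
	docker exec pause-583340 cat /etc/hostname
	docker exec pause-583340 grep 127.0.1.1 /etc/hosts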
	I1222 01:14:45.103907 2044197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:14:45.103949 2044197 ubuntu.go:190] setting up certificates
	I1222 01:14:45.103971 2044197 provision.go:84] configureAuth start
	I1222 01:14:45.104159 2044197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-583340
	I1222 01:14:45.125837 2044197 provision.go:143] copyHostCerts
	I1222 01:14:45.125921 2044197 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:14:45.125948 2044197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:14:45.126036 2044197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:14:45.126174 2044197 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:14:45.126187 2044197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:14:45.126220 2044197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:14:45.126300 2044197 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:14:45.126311 2044197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:14:45.126339 2044197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:14:45.126408 2044197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.pause-583340 san=[127.0.0.1 192.168.76.2 localhost minikube pause-583340]
	I1222 01:14:45.270053 2044197 provision.go:177] copyRemoteCerts
	I1222 01:14:45.270220 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:14:45.270388 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:45.288361 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:45.395036 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:14:45.416230 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1222 01:14:45.435176 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:14:45.455330 2044197 provision.go:87] duration metric: took 351.345264ms to configureAuth
	I1222 01:14:45.455370 2044197 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:14:45.455596 2044197 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:14:45.455712 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:45.473412 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:45.473807 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:45.473829 2044197 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:14:51.020750 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:14:51.020780 2044197 machine.go:97] duration metric: took 6.435632838s to provisionDockerMachine
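The tee command above drops a sysconfig fragment marking the service CIDR (10.96.0.0/12) as an insecure registry for cri-o, then restarts the runtime; that restart accounts for most of the ~6.4s provisionDockerMachine time just logged. To inspect the generated file (container name from this run):

	# Sketch: view the cri-o options fragment minikube just wrote.
	docker exec pause-583340 cat /etc/sysconfig/crio.minikube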
	I1222 01:14:51.020792 2044197 start.go:293] postStartSetup for "pause-583340" (driver="docker")
	I1222 01:14:51.020803 2044197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:14:51.020867 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:14:51.020925 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.042068 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.137835 2044197 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:14:51.141516 2044197 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:14:51.141544 2044197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:14:51.141555 2044197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:14:51.141614 2044197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:14:51.141695 2044197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:14:51.142445 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:14:51.150356 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:14:51.168412 2044197 start.go:296] duration metric: took 147.604392ms for postStartSetup
	I1222 01:14:51.168511 2044197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:14:51.168561 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.185653 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.279201 2044197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:14:51.284006 2044197 fix.go:56] duration metric: took 6.719054259s for fixHost
	I1222 01:14:51.284030 2044197 start.go:83] releasing machines lock for "pause-583340", held for 6.719106523s
	I1222 01:14:51.284110 2044197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-583340
	I1222 01:14:51.302487 2044197 ssh_runner.go:195] Run: cat /version.json
	I1222 01:14:51.302503 2044197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:14:51.302548 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.302568 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.322472 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.329453 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.545155 2044197 ssh_runner.go:195] Run: systemctl --version
	I1222 01:14:51.551335 2044197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:14:51.644905 2044197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:14:51.649169 2044197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:14:51.649256 2044197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:14:51.659511 2044197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
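The find invocation above is logged with its shell quoting stripped. A runnable equivalent, reconstructed on that assumption (GNU find substitutes {} anywhere inside an -exec argument):

	# Sketch: sideline stale bridge/podman CNI configs by renaming them, quoting restored.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;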
	I1222 01:14:51.659535 2044197 start.go:496] detecting cgroup driver to use...
	I1222 01:14:51.659584 2044197 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:14:51.659656 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:14:51.674804 2044197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:14:51.688005 2044197 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:14:51.688100 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:14:51.704038 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:14:51.717392 2044197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:14:51.857869 2044197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:14:51.992608 2044197 docker.go:234] disabling docker service ...
	I1222 01:14:51.992713 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:14:52.009176 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:14:52.023622 2044197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:14:52.155267 2044197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:14:52.295268 2044197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:14:52.309117 2044197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:14:52.323765 2044197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:14:52.323876 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.333062 2044197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:14:52.333154 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.343018 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.351788 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.360401 2044197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:14:52.368905 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.377743 2044197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.386237 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.395237 2044197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:14:52.403890 2044197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
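Taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, run conmon in the pod cgroup, and allow unprivileged binds to low ports via default_sysctls. The resulting drop-in can be reviewed before the restart below (container name from this run):

	# Sketch: show the cri-o drop-in after minikube's in-place edits.
	docker exec pause-583340 cat /etc/crio/crio.conf.d/02-crio.conf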
	I1222 01:14:52.410975 2044197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:14:52.551499 2044197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:14:52.762691 2044197 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:14:52.762761 2044197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:14:52.766649 2044197 start.go:564] Will wait 60s for crictl version
	I1222 01:14:52.766715 2044197 ssh_runner.go:195] Run: which crictl
	I1222 01:14:52.770409 2044197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:14:52.800742 2044197 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:14:52.800833 2044197 ssh_runner.go:195] Run: crio --version
	I1222 01:14:52.830111 2044197 ssh_runner.go:195] Run: crio --version
	I1222 01:14:52.869084 2044197 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:14:52.872094 2044197 cli_runner.go:164] Run: docker network inspect pause-583340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:14:52.888809 2044197 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:14:52.892995 2044197 kubeadm.go:884] updating cluster {Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:14:52.893128 2044197 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:14:52.893192 2044197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:14:52.926955 2044197 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:14:52.926980 2044197 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:14:52.927035 2044197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:14:52.955434 2044197 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:14:52.955456 2044197 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:14:52.955470 2044197 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1222 01:14:52.955568 2044197 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-583340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
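The kubelet override above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below), so the effective unit is the base kubelet.service plus this fragment. One way to see the merged result:

	# Sketch: print kubelet.service plus all drop-ins, as systemd resolves them.
	docker exec pause-583340 systemctl cat kubelet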
	I1222 01:14:52.955646 2044197 ssh_runner.go:195] Run: crio config
	I1222 01:14:53.027776 2044197 cni.go:84] Creating CNI manager for ""
	I1222 01:14:53.027800 2044197 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:14:53.027820 2044197 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:14:53.027842 2044197 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-583340 NodeName:pause-583340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:14:53.027975 2044197 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-583340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:14:53.028051 2044197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:14:53.036990 2044197 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:14:53.037057 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:14:53.044619 2044197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1222 01:14:53.057220 2044197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:14:53.069928 2044197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
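The rendered kubeadm manifest shown above is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2209-byte scp just logged) and later diffed against the live copy to decide whether a reconfigure is needed. If editing it by hand, it can be sanity-checked with the pinned kubeadm binary (a sketch; `kubeadm config validate` exists in recent releases such as v1.34):

	# Sketch: validate the staged kubeadm config with the same binary the cluster uses.
	docker exec pause-583340 /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new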
	I1222 01:14:53.084406 2044197 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:14:53.088525 2044197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:14:53.241441 2044197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:14:53.255295 2044197 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340 for IP: 192.168.76.2
	I1222 01:14:53.255315 2044197 certs.go:195] generating shared ca certs ...
	I1222 01:14:53.255331 2044197 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:14:53.255493 2044197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:14:53.255544 2044197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:14:53.255557 2044197 certs.go:257] generating profile certs ...
	I1222 01:14:53.255646 2044197 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.key
	I1222 01:14:53.256496 2044197 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/apiserver.key.edbfcaf6
	I1222 01:14:53.257315 2044197 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/proxy-client.key
	I1222 01:14:53.257540 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:14:53.257585 2044197 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:14:53.257598 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:14:53.257623 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:14:53.257651 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:14:53.257682 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:14:53.257735 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:14:53.258370 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:14:53.276899 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:14:53.295039 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:14:53.313782 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:14:53.331286 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 01:14:53.348294 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:14:53.364993 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:14:53.385049 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:14:53.403281 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:14:53.419705 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:14:53.436243 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:14:53.452529 2044197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:14:53.464611 2044197 ssh_runner.go:195] Run: openssl version
	I1222 01:14:53.474407 2044197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.482713 2044197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:14:53.490666 2044197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.495082 2044197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.495145 2044197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.537910 2044197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:14:53.545378 2044197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.553189 2044197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:14:53.561350 2044197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.565209 2044197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.565294 2044197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.609663 2044197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:14:53.617158 2044197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.624509 2044197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:14:53.631460 2044197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.634987 2044197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.635057 2044197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.680484 2044197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
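Each `openssl x509 -hash` call above prints the certificate's subject-name hash, and OpenSSL looks up CAs in /etc/ssl/certs by that hash plus a numeric suffix, which is what the `sudo test -L /etc/ssl/certs/<hash>.0` probes confirm. For minikubeCA the pairing works out as:

	# Sketch: the symlink name is the subject hash printed by -hash, suffixed with ".0".
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 per the log
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0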
	I1222 01:14:53.688094 2044197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:14:53.692652 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:14:53.738660 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:14:53.784479 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:14:53.830307 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:14:53.873293 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:14:53.914826 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
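`-checkend 86400` makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certificates need regeneration on restart. Standalone usage looks like:

	# Sketch: the exit status says whether the cert remains valid for another 24h.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"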
	I1222 01:14:53.955907 2044197 kubeadm.go:401] StartCluster: {Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:14:53.956029 2044197 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:14:53.956122 2044197 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:14:53.982311 2044197 cri.go:96] found id: "5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb"
	I1222 01:14:53.982334 2044197 cri.go:96] found id: "d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0"
	I1222 01:14:53.982339 2044197 cri.go:96] found id: "e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242"
	I1222 01:14:53.982343 2044197 cri.go:96] found id: "e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	I1222 01:14:53.982346 2044197 cri.go:96] found id: "75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	I1222 01:14:53.982349 2044197 cri.go:96] found id: "1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2"
	I1222 01:14:53.982352 2044197 cri.go:96] found id: "bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a"
	I1222 01:14:53.982355 2044197 cri.go:96] found id: ""
	I1222 01:14:53.982421 2044197 ssh_runner.go:195] Run: sudo runc list -f json
	W1222 01:14:54.010784 2044197 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:14:53Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:14:54.010901 2044197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:14:54.019779 2044197 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:14:54.019800 2044197 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:14:54.019896 2044197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:14:54.028013 2044197 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:14:54.028670 2044197 kubeconfig.go:125] found "pause-583340" server: "https://192.168.76.2:8443"
	I1222 01:14:54.029599 2044197 kapi.go:59] client config for pause-583340: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:14:54.030123 2044197 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 01:14:54.030143 2044197 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 01:14:54.030149 2044197 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 01:14:54.030154 2044197 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 01:14:54.030162 2044197 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 01:14:54.030441 2044197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:14:54.040195 2044197 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:14:54.040268 2044197 kubeadm.go:602] duration metric: took 20.461561ms to restartPrimaryControlPlane
	I1222 01:14:54.040287 2044197 kubeadm.go:403] duration metric: took 84.390935ms to StartCluster
	I1222 01:14:54.040302 2044197 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:14:54.040384 2044197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:14:54.041439 2044197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:14:54.041666 2044197 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:14:54.042070 2044197 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:14:54.042198 2044197 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:14:54.048075 2044197 out.go:179] * Verifying Kubernetes components...
	I1222 01:14:54.048075 2044197 out.go:179] * Enabled addons: 
	I1222 01:14:50.361994 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:50.378031 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:50.378094 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:50.378107 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:50.378170 2043948 retry.go:84] will retry after 4.3s: couldn't verify container is exited. %v: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:54.050984 2044197 addons.go:530] duration metric: took 8.916043ms for enable addons: enabled=[]
	I1222 01:14:54.051103 2044197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:14:54.183323 2044197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:14:54.197920 2044197 node_ready.go:35] waiting up to 6m0s for node "pause-583340" to be "Ready" ...
	I1222 01:14:54.648926 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:54.666474 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:54.666542 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:54.666555 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:54.666588 2043948 oci.go:88] couldn't shut down missing-upgrade-158754 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	 
	I1222 01:14:54.666649 2043948 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-158754
	I1222 01:14:54.681392 2043948 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-158754
	W1222 01:14:54.701614 2043948 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-158754 returned with exit code 1
	I1222 01:14:54.701727 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:14:54.717302 2043948 cli_runner.go:164] Run: docker network rm missing-upgrade-158754
	I1222 01:14:54.830184 2043948 fix.go:124] Sleeping 1 second for extra luck!
	I1222 01:14:55.830459 2043948 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:14:55.834219 2043948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:14:55.834350 2043948 start.go:159] libmachine.API.Create for "missing-upgrade-158754" (driver="docker")
	I1222 01:14:55.834389 2043948 client.go:173] LocalClient.Create starting
	I1222 01:14:55.834507 2043948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:14:55.834543 2043948 main.go:144] libmachine: Decoding PEM data...
	I1222 01:14:55.834568 2043948 main.go:144] libmachine: Parsing certificate...
	I1222 01:14:55.834625 2043948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:14:55.834648 2043948 main.go:144] libmachine: Decoding PEM data...
	I1222 01:14:55.834671 2043948 main.go:144] libmachine: Parsing certificate...
	I1222 01:14:55.834940 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:14:55.854428 2043948 cli_runner.go:211] docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:14:55.854508 2043948 network_create.go:284] running [docker network inspect missing-upgrade-158754] to gather additional debugging logs...
	I1222 01:14:55.854525 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754
	W1222 01:14:55.881555 2043948 cli_runner.go:211] docker network inspect missing-upgrade-158754 returned with exit code 1
	I1222 01:14:55.881591 2043948 network_create.go:287] error running [docker network inspect missing-upgrade-158754]: docker network inspect missing-upgrade-158754: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-158754 not found
	I1222 01:14:55.881606 2043948 network_create.go:289] output of [docker network inspect missing-upgrade-158754]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-158754 not found
	
	** /stderr **
	I1222 01:14:55.881730 2043948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:14:55.921292 2043948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:14:55.921629 2043948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:14:55.921953 2043948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:14:55.922289 2043948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f7dfb485d34 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:c1:7c:67:4e:65} reservation:<nil>}
	I1222 01:14:55.922715 2043948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b829f0}
	I1222 01:14:55.922752 2043948 network_create.go:124] attempt to create docker network missing-upgrade-158754 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:14:55.922810 2043948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-158754 missing-upgrade-158754
	I1222 01:14:56.004672 2043948 network_create.go:108] docker network missing-upgrade-158754 192.168.85.0/24 created
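The lines above show the free-subnet scan: candidate /24s start at 192.168.49.0 and step the third octet by 9, skipping any subnet already backed by a local bridge interface, until 192.168.85.0/24 is free. A hedged sketch under those assumptions (the `taken` check here is a simplification of minikube's network.go probing, and the `docker network create` flags are copied from the logged command):

```go
package main

import (
	"fmt"
	"net"
	"os/exec"
)

// taken reports whether any local interface already holds an address
// inside the candidate subnet (each taken /24 in the log is backed by
// a br-* bridge interface owning the .1 gateway).
func taken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	name := "missing-upgrade-158754" // network name from the log
	for octet := 49; octet < 255; octet += 9 { // 49, 58, 67, 76, 85, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken(subnet) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		// Flags copied verbatim from the `docker network create` call above.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name, name).CombinedOutput()
		if err != nil {
			fmt.Printf("create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("docker network %s %s created\n", name, subnet)
		return
	}
}
```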
	I1222 01:14:56.004722 2043948 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-158754" container
	I1222 01:14:56.004819 2043948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:14:56.046418 2043948 cli_runner.go:164] Run: docker volume create missing-upgrade-158754 --label name.minikube.sigs.k8s.io=missing-upgrade-158754 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:14:56.073583 2043948 oci.go:103] Successfully created a docker volume missing-upgrade-158754
	I1222 01:14:56.073689 2043948 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-158754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-158754 --entrypoint /usr/bin/test -v missing-upgrade-158754:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1222 01:14:56.571638 2043948 oci.go:107] Successfully prepared a docker volume missing-upgrade-158754
	I1222 01:14:56.571698 2043948 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1222 01:14:56.571708 2043948 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:14:56.571772 2043948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-158754:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:15:03.055612 2043948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-158754:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (6.483804745s)
	I1222 01:15:03.055683 2043948 kic.go:203] duration metric: took 6.483970557s to extract preloaded images to volume ...
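The two `docker run --rm --entrypoint /usr/bin/tar` lines above are the preload fast-path: the lz4 image tarball is bind-mounted read-only into a throwaway kicbase container, which unpacks it into the named volume that later becomes the node's /var. A compact sketch using the exact paths and flags from the log (the image digest is omitted for brevity):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4"
	volume := "missing-upgrade-158754"
	image := "gcr.io/k8s-minikube/kicbase:v0.0.46"

	start := time.Now()
	// tar runs as the container entrypoint; -I lz4 delegates decompression.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
}
```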
	W1222 01:15:03.055822 2043948 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:15:03.055933 2043948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:15:03.160031 2043948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-158754 --name missing-upgrade-158754 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-158754 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-158754 --network missing-upgrade-158754 --ip 192.168.85.2 --volume missing-upgrade-158754:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1222 01:15:03.550022 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Running}}
	I1222 01:15:03.584511 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	I1222 01:15:03.612143 2043948 cli_runner.go:164] Run: docker exec missing-upgrade-158754 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:15:03.669464 2043948 oci.go:144] the created container "missing-upgrade-158754" has a running status.
	I1222 01:15:03.669492 2043948 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa...
	I1222 01:15:03.806541 2043948 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:15:03.838013 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	I1222 01:15:03.864590 2043948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:15:03.864609 2043948 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-158754 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:15:03.927032 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	I1222 01:15:03.950226 2043948 machine.go:94] provisionDockerMachine start ...
	I1222 01:15:03.950315 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:03.971555 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:03.973913 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:03.973966 2043948 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:15:03.975455 2043948 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
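The `ssh: handshake failed: EOF` here is transient: sshd inside the just-created container is not accepting connections yet, and the provisioner simply redials (success follows at 01:15:07). A minimal retry sketch, assuming the key path and forwarded host port 38666 from the log:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		Timeout:         10 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "127.0.0.1:38666", cfg) // port from the log
		if err != nil {
			fmt.Println("dial failed, retrying:", err) // e.g. handshake EOF
			time.Sleep(time.Second)
			continue
		}
		defer client.Close()
		fmt.Println("ssh ready")
		return
	}
}
```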
	I1222 01:15:02.353045 2044197 node_ready.go:49] node "pause-583340" is "Ready"
	I1222 01:15:02.353079 2044197 node_ready.go:38] duration metric: took 8.155126141s for node "pause-583340" to be "Ready" ...
	I1222 01:15:02.353100 2044197 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:15:02.353163 2044197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:15:02.367852 2044197 api_server.go:72] duration metric: took 8.326148435s to wait for apiserver process to appear ...
	I1222 01:15:02.367878 2044197 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:15:02.367898 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:02.430748 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:02.430783 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500
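These 500s come from polling the apiserver's /healthz until every poststarthook reports ok; the timestamps show a roughly 500ms cadence. A bare-bones poller under those assumptions (TLS verification is skipped purely for illustration; minikube authenticates with client certificates, per the rest.Config dump later in this log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return
			}
			// 500 bodies list each poststarthook as [+] ok / [-] failed.
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```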
	I1222 01:15:02.868191 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:02.878539 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:02.878572 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500
	I1222 01:15:03.368139 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:03.394180 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:03.394209 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500
	I1222 01:15:03.868365 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:03.897346 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:03.897401 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500
	I1222 01:15:07.100886 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:07.100956 2043948 ubuntu.go:182] provisioning hostname "missing-upgrade-158754"
	I1222 01:15:07.101060 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.118252 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:07.118569 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:07.118587 2043948 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-158754 && echo "missing-upgrade-158754" | sudo tee /etc/hostname
	I1222 01:15:07.253877 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:07.253978 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.273078 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:07.273444 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:07.273469 2043948 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-158754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-158754/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-158754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:15:07.401448 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:15:07.401471 2043948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:15:07.401501 2043948 ubuntu.go:190] setting up certificates
	I1222 01:15:07.401516 2043948 provision.go:84] configureAuth start
	I1222 01:15:07.401579 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:07.419722 2043948 provision.go:143] copyHostCerts
	I1222 01:15:07.419794 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:15:07.419806 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:15:07.419888 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:15:07.419995 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:15:07.420006 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:15:07.420071 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:15:07.420161 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:15:07.420173 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:15:07.420204 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:15:07.420273 2043948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-158754 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-158754]
	I1222 01:15:07.579332 2043948 provision.go:177] copyRemoteCerts
	I1222 01:15:07.579404 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:15:07.579449 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.602205 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:07.694876 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:15:07.723295 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 01:15:07.751685 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:15:07.778844 2043948 provision.go:87] duration metric: took 377.311065ms to configureAuth
	I1222 01:15:07.778871 2043948 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:15:07.779059 2043948 config.go:182] Loaded profile config "missing-upgrade-158754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1222 01:15:07.779180 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.798484 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:07.798792 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:07.798812 2043948 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:15:08.100938 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:15:08.100968 2043948 machine.go:97] duration metric: took 4.15071834s to provisionDockerMachine
	I1222 01:15:08.100980 2043948 client.go:176] duration metric: took 12.266582218s to LocalClient.Create
	I1222 01:15:08.100997 2043948 start.go:167] duration metric: took 12.266648169s to libmachine.API.Create "missing-upgrade-158754"
	I1222 01:15:08.101005 2043948 start.go:293] postStartSetup for "missing-upgrade-158754" (driver="docker")
	I1222 01:15:08.101018 2043948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:15:08.101100 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:15:08.101153 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.121911 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:08.218957 2043948 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:15:08.222118 2043948 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:15:08.222153 2043948 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1222 01:15:08.222164 2043948 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1222 01:15:08.222171 2043948 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1222 01:15:08.222181 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:15:08.222242 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:15:08.222332 2043948 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:15:08.222440 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:15:08.231432 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:15:08.256430 2043948 start.go:296] duration metric: took 155.40784ms for postStartSetup
	I1222 01:15:08.256815 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:08.274420 2043948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/config.json ...
	I1222 01:15:08.274699 2043948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:15:08.274760 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.292121 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:08.378805 2043948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:15:08.383562 2043948 start.go:128] duration metric: took 12.553067968s to createHost
	I1222 01:15:08.383723 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:15:08.401469 2043948 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:15:08.401497 2043948 machine.go:94] provisionDockerMachine start ...
	I1222 01:15:08.401576 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.419156 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:08.419476 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:08.419491 2043948 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:15:08.548586 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:08.548612 2043948 ubuntu.go:182] provisioning hostname "missing-upgrade-158754"
	I1222 01:15:08.548688 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.570871 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:08.571245 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:08.571268 2043948 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-158754 && echo "missing-upgrade-158754" | sudo tee /etc/hostname
	I1222 01:15:08.705192 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:08.705276 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.724013 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:08.724377 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:08.724401 2043948 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-158754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-158754/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-158754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:15:08.849439 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:15:08.849467 2043948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:15:08.849484 2043948 ubuntu.go:190] setting up certificates
	I1222 01:15:08.849496 2043948 provision.go:84] configureAuth start
	I1222 01:15:08.849556 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:08.867656 2043948 provision.go:143] copyHostCerts
	I1222 01:15:08.867729 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:15:08.867744 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:15:08.867814 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:15:08.867913 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:15:08.867928 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:15:08.867958 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:15:08.868021 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:15:08.868030 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:15:08.868049 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:15:08.868100 2043948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-158754 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-158754]
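provision.go's "generating server cert" step mints a server certificate whose SANs cover the container IP, loopback, and the machine names, signed by the profile CA. A condensed crypto/x509 sketch with the SAN list and org from the log (key sizes and validity periods are illustrative, and the CA here is generated in-memory rather than read from ca.pem):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-158754"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list copied from the log line above.
		DNSNames:    []string{"localhost", "minikube", "missing-upgrade-158754"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server.pem: %d DER bytes\n", len(der))
}
```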
	I1222 01:15:04.368114 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:04.377601 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1222 01:15:04.378971 2044197 api_server.go:141] control plane version: v1.34.3
	I1222 01:15:04.378997 2044197 api_server.go:131] duration metric: took 2.011111252s to wait for apiserver health ...
	I1222 01:15:04.379006 2044197 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:15:04.384052 2044197 system_pods.go:59] 7 kube-system pods found
	I1222 01:15:04.384089 2044197 system_pods.go:61] "coredns-66bc5c9577-q9vtv" [c0f72789-c82a-4c15-9a16-b5fe1719ee81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:15:04.384102 2044197 system_pods.go:61] "etcd-pause-583340" [999b060b-d37b-4922-a6f8-71276b8cf904] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:15:04.384109 2044197 system_pods.go:61] "kindnet-tw999" [1f4faa7e-1c28-47ad-997f-e0f221fb7da9] Running
	I1222 01:15:04.384115 2044197 system_pods.go:61] "kube-apiserver-pause-583340" [66197fa8-541e-4d4e-ba96-4c264c138c03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:15:04.384128 2044197 system_pods.go:61] "kube-controller-manager-pause-583340" [7c2ed5ea-dd6a-43de-8400-d40c0471b7eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:15:04.384135 2044197 system_pods.go:61] "kube-proxy-wtzct" [7b1b5156-bdf0-4050-bbde-959b400ff9c7] Running
	I1222 01:15:04.384141 2044197 system_pods.go:61] "kube-scheduler-pause-583340" [20f43e42-c175-4603-b1a0-6e498f3cc2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:15:04.384150 2044197 system_pods.go:74] duration metric: took 5.138633ms to wait for pod list to return data ...
	I1222 01:15:04.384161 2044197 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:15:04.394597 2044197 default_sa.go:45] found service account: "default"
	I1222 01:15:04.394626 2044197 default_sa.go:55] duration metric: took 10.457797ms for default service account to be created ...
	I1222 01:15:04.394638 2044197 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:15:04.404616 2044197 system_pods.go:86] 7 kube-system pods found
	I1222 01:15:04.404750 2044197 system_pods.go:89] "coredns-66bc5c9577-q9vtv" [c0f72789-c82a-4c15-9a16-b5fe1719ee81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:15:04.404776 2044197 system_pods.go:89] "etcd-pause-583340" [999b060b-d37b-4922-a6f8-71276b8cf904] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:15:04.404816 2044197 system_pods.go:89] "kindnet-tw999" [1f4faa7e-1c28-47ad-997f-e0f221fb7da9] Running
	I1222 01:15:04.404844 2044197 system_pods.go:89] "kube-apiserver-pause-583340" [66197fa8-541e-4d4e-ba96-4c264c138c03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:15:04.404865 2044197 system_pods.go:89] "kube-controller-manager-pause-583340" [7c2ed5ea-dd6a-43de-8400-d40c0471b7eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:15:04.404897 2044197 system_pods.go:89] "kube-proxy-wtzct" [7b1b5156-bdf0-4050-bbde-959b400ff9c7] Running
	I1222 01:15:04.404921 2044197 system_pods.go:89] "kube-scheduler-pause-583340" [20f43e42-c175-4603-b1a0-6e498f3cc2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:15:04.404941 2044197 system_pods.go:126] duration metric: took 10.296046ms to wait for k8s-apps to be running ...
	I1222 01:15:04.404975 2044197 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:15:04.405064 2044197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:15:04.431147 2044197 system_svc.go:56] duration metric: took 26.146491ms WaitForService to wait for kubelet
	I1222 01:15:04.431180 2044197 kubeadm.go:587] duration metric: took 10.389480678s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:15:04.431202 2044197 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:15:04.435554 2044197 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:15:04.435587 2044197 node_conditions.go:123] node cpu capacity is 2
	I1222 01:15:04.435601 2044197 node_conditions.go:105] duration metric: took 4.393351ms to run NodePressure ...
	I1222 01:15:04.435614 2044197 start.go:242] waiting for startup goroutines ...
	I1222 01:15:04.435622 2044197 start.go:247] waiting for cluster config update ...
	I1222 01:15:04.435630 2044197 start.go:256] writing updated cluster config ...
	I1222 01:15:04.435958 2044197 ssh_runner.go:195] Run: rm -f paused
	I1222 01:15:04.444837 2044197 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:15:04.445545 2044197 kapi.go:59] client config for pause-583340: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:15:04.448443 2044197 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q9vtv" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:15:06.454183 2044197 pod_ready.go:104] pod "coredns-66bc5c9577-q9vtv" is not "Ready", error: <nil>
	I1222 01:15:07.954673 2044197 pod_ready.go:94] pod "coredns-66bc5c9577-q9vtv" is "Ready"
	I1222 01:15:07.954702 2044197 pod_ready.go:86] duration metric: took 3.506236882s for pod "coredns-66bc5c9577-q9vtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:07.958058 2044197 pod_ready.go:83] waiting for pod "etcd-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
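pod_ready.go waits for each control-plane pod to carry the Ready condition. A minimal client-go sketch of that check; the kubeconfig path is hypothetical (minikube constructs the rest.Config directly, as its kapi.go dump above shows), and the pod name is taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-q9vtv", metav1.GetOptions{}) // pod name from the log
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```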
	I1222 01:15:09.224341 2043948 provision.go:177] copyRemoteCerts
	I1222 01:15:09.224409 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:15:09.224450 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.242907 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:09.334862 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 01:15:09.361207 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:15:09.386974 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:15:09.412399 2043948 provision.go:87] duration metric: took 562.887435ms to configureAuth
	I1222 01:15:09.412427 2043948 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:15:09.412651 2043948 config.go:182] Loaded profile config "missing-upgrade-158754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1222 01:15:09.412851 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.430484 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:09.430799 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:09.430818 2043948 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:15:09.704154 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:15:09.704174 2043948 machine.go:97] duration metric: took 1.302669347s to provisionDockerMachine
	I1222 01:15:09.704185 2043948 start.go:293] postStartSetup for "missing-upgrade-158754" (driver="docker")
	I1222 01:15:09.704197 2043948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:15:09.704265 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:15:09.704306 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.724769 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:09.819805 2043948 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:15:09.823305 2043948 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:15:09.823336 2043948 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1222 01:15:09.823346 2043948 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1222 01:15:09.823353 2043948 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1222 01:15:09.823363 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:15:09.823430 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:15:09.823511 2043948 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:15:09.823617 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:15:09.832390 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:15:09.858896 2043948 start.go:296] duration metric: took 154.695674ms for postStartSetup
	I1222 01:15:09.858977 2043948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:15:09.859019 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.877565 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:09.967723 2043948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:15:09.972300 2043948 fix.go:56] duration metric: took 30.540201566s for fixHost
	I1222 01:15:09.972335 2043948 start.go:83] releasing machines lock for "missing-upgrade-158754", held for 30.540263046s
	I1222 01:15:09.972423 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:09.989776 2043948 ssh_runner.go:195] Run: cat /version.json
	I1222 01:15:09.989839 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.990096 2043948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:15:09.990159 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:10.010556 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:10.023368 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	W1222 01:15:10.113833 2043948 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1222 01:15:10.113953 2043948 ssh_runner.go:195] Run: systemctl --version
	I1222 01:15:10.279703 2043948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:15:10.425423 2043948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 01:15:10.429899 2043948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:15:10.453272 2043948 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1222 01:15:10.453395 2043948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:15:10.495885 2043948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
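	
	For reference, the two find/-exec runs above sideline the stock loopback and bridge/podman CNI configs so that the CNI minikube installs later is the only one active. A standalone, properly quoted sketch of the same steps (paths taken from this log; illustrative only, not part of the test output):
	
	  # move loopback CNI configs out of the way
	  sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	       -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	  # same treatment for bridge/podman configs
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	       \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	       -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	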
	I1222 01:15:10.495909 2043948 start.go:496] detecting cgroup driver to use...
	I1222 01:15:10.495946 2043948 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:15:10.496015 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:15:10.517936 2043948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:15:10.530313 2043948 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:15:10.530433 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:15:10.544197 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:15:10.560299 2043948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:15:10.652121 2043948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:15:10.770046 2043948 docker.go:234] disabling docker service ...
	I1222 01:15:10.770113 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:15:10.793298 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:15:10.805307 2043948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:15:10.895648 2043948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:15:11.015548 2043948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:15:11.028545 2043948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:15:11.046259 2043948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1222 01:15:11.046324 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.056735 2043948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:15:11.056808 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.067340 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.079491 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.090978 2043948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:15:11.101416 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.112169 2043948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.129991 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.141202 2043948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:15:11.151680 2043948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:15:11.160617 2043948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:15:11.259025 2043948 ssh_runner.go:195] Run: sudo systemctl restart crio
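	
	Taken together, the sed edits above pin the pause image, cgroup driver, and conmon cgroup in the CRI-O drop-in config before the daemon is restarted. A condensed way to confirm the overrides landed (illustrative only):
	
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  systemctl is-active crio        # should print "active" after the restart
	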
	I1222 01:15:11.377661 2043948 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:15:11.377787 2043948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:15:11.381568 2043948 start.go:564] Will wait 60s for crictl version
	I1222 01:15:11.381651 2043948 ssh_runner.go:195] Run: which crictl
	I1222 01:15:11.385203 2043948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1222 01:15:11.423702 2043948 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1222 01:15:11.423848 2043948 ssh_runner.go:195] Run: crio --version
	I1222 01:15:11.474234 2043948 ssh_runner.go:195] Run: crio --version
	I1222 01:15:11.520785 2043948 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1222 01:15:11.523641 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:15:11.539601 2043948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:15:11.543342 2043948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
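	
	The hosts update above uses a grep-out-then-append pattern so that re-runs stay idempotent: any stale host.minikube.internal line is stripped before the current mapping is appended. The same idiom is reused further down for control-plane.minikube.internal. Schematically (illustrative restatement of the command above):
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
	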
	I1222 01:15:11.554113 2043948 kubeadm.go:884] updating cluster {Name:missing-upgrade-158754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-158754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:15:11.554239 2043948 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1222 01:15:11.554292 2043948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:15:11.633032 2043948 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:15:11.633052 2043948 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:15:11.633105 2043948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:15:11.681442 2043948 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:15:11.681463 2043948 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:15:11.681471 2043948 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1222 01:15:11.681560 2043948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-158754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-158754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:15:11.681645 2043948 ssh_runner.go:195] Run: crio config
	I1222 01:15:11.730993 2043948 cni.go:84] Creating CNI manager for ""
	I1222 01:15:11.731018 2043948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:15:11.731036 2043948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:15:11.731081 2043948 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-158754 NodeName:missing-upgrade-158754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:15:11.731219 2043948 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-158754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
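	This generated config is staged as /var/tmp/minikube/kubeadm.yaml.new and later diffed against the previous kubeadm.yaml to decide whether the cluster must be reconfigured (the drift diff appears further down in this log). The check reduces to (illustrative only):
	
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    && echo "no drift" || echo "drift detected: reconfigure from the new config"
	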
	I1222 01:15:11.731300 2043948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1222 01:15:11.740075 2043948 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:15:11.740192 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:15:11.748844 2043948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:15:11.767261 2043948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:15:11.785127 2043948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:15:11.802897 2043948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:15:11.806331 2043948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:15:11.817130 2043948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:15:11.903520 2043948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:15:11.920811 2043948 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754 for IP: 192.168.85.2
	I1222 01:15:11.920831 2043948 certs.go:195] generating shared ca certs ...
	I1222 01:15:11.920845 2043948 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:15:11.920977 2043948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:15:11.921025 2043948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:15:11.921032 2043948 certs.go:257] generating profile certs ...
	I1222 01:15:11.921118 2043948 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.key
	I1222 01:15:11.921179 2043948 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/apiserver.key.0b3f80f1
	I1222 01:15:11.921215 2043948 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/proxy-client.key
	I1222 01:15:11.921356 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:15:11.921392 2043948 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:15:11.921400 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:15:11.921435 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:15:11.921462 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:15:11.921486 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:15:11.921563 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:15:11.922112 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:15:11.981230 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:15:12.051413 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:15:12.085628 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:15:12.115168 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1222 01:15:12.139772 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:15:12.164268 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:15:12.187811 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:15:12.210971 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:15:12.235132 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:15:12.262996 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:15:12.286928 2043948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:15:12.305090 2043948 ssh_runner.go:195] Run: openssl version
	I1222 01:15:12.310386 2043948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.318484 2043948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:15:12.326878 2043948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.330109 2043948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.330179 2043948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.336999 2043948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:15:12.345419 2043948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:15:12.353620 2043948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.361897 2043948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:15:12.370444 2043948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.373901 2043948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.373975 2043948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.380793 2043948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:15:12.389494 2043948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:15:12.398122 2043948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.406603 2043948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:15:12.415290 2043948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.419018 2043948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.419105 2043948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.426084 2043948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:15:12.435053 2043948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:15:12.443291 2043948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:15:12.447030 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:15:12.453845 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:15:12.461806 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:15:12.468932 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:15:12.475527 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:15:12.483225 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
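	
	Two openssl idioms drive this certificate block: the hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) come from openssl x509 -hash, which is how OpenSSL's certificate-directory lookup finds a CA under /etc/ssl/certs, and -checkend 86400 exits non-zero if a certificate expires within the next 24 hours. Illustrative restatement:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	  # non-zero exit status if the cert expires within 86400 seconds
	  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	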
	I1222 01:15:12.490317 2043948 kubeadm.go:401] StartCluster: {Name:missing-upgrade-158754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-158754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:15:12.490398 2043948 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:15:12.490481 2043948 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:15:12.527990 2043948 cri.go:96] found id: ""
	I1222 01:15:12.528120 2043948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:15:12.537045 2043948 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:15:12.537125 2043948 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:15:12.537188 2043948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:15:12.546569 2043948 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:15:12.547219 2043948 kubeconfig.go:125] found "missing-upgrade-158754" server: "https://192.168.85.2:8443"
	I1222 01:15:12.548050 2043948 kapi.go:59] client config for missing-upgrade-158754: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:15:12.548533 2043948 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 01:15:12.548560 2043948 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 01:15:12.548566 2043948 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 01:15:12.548571 2043948 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 01:15:12.548576 2043948 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 01:15:12.548852 2043948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:15:12.558008 2043948 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 01:14:19.743451667 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 01:15:11.793858162 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1222 01:15:12.558069 2043948 kubeadm.go:1161] stopping kube-system containers ...
	I1222 01:15:12.558087 2043948 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 01:15:12.558148 2043948 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:15:12.606028 2043948 cri.go:96] found id: ""
	I1222 01:15:12.606138 2043948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 01:15:12.618900 2043948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:15:12.628227 2043948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:15:12.628249 2043948 kubeadm.go:158] found existing configuration files:
	
	I1222 01:15:12.628309 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:15:12.636984 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:15:12.637087 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:15:12.645476 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:15:12.654096 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:15:12.654160 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:15:12.663150 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:15:12.672121 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:15:12.672191 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:15:12.681103 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:15:12.689395 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:15:12.689461 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:15:12.698370 2043948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:15:12.707277 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:15:12.754216 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
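	
	Instead of a full kubeadm init, the restart path replays individual init phases against the refreshed config. The two invocations above are equivalent to running, with minikube's bundled binaries first on PATH (illustrative restatement):
	
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
	    kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml'      # regenerate missing certs
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
	    kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml' # admin/kubelet/cm/scheduler kubeconfigs
	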
	W1222 01:15:09.964359 2044197 pod_ready.go:104] pod "etcd-pause-583340" is not "Ready", error: <nil>
	W1222 01:15:11.999593 2044197 pod_ready.go:104] pod "etcd-pause-583340" is not "Ready", error: <nil>
	I1222 01:15:13.463612 2044197 pod_ready.go:94] pod "etcd-pause-583340" is "Ready"
	I1222 01:15:13.463639 2044197 pod_ready.go:86] duration metric: took 5.505552097s for pod "etcd-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:13.466444 2044197 pod_ready.go:83] waiting for pod "kube-apiserver-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:13.474167 2044197 pod_ready.go:94] pod "kube-apiserver-pause-583340" is "Ready"
	I1222 01:15:13.474192 2044197 pod_ready.go:86] duration metric: took 7.726459ms for pod "kube-apiserver-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:13.477207 2044197 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:14.983943 2044197 pod_ready.go:94] pod "kube-controller-manager-pause-583340" is "Ready"
	I1222 01:15:14.983972 2044197 pod_ready.go:86] duration metric: took 1.506743191s for pod "kube-controller-manager-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:14.986957 2044197 pod_ready.go:83] waiting for pod "kube-proxy-wtzct" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:14.992370 2044197 pod_ready.go:94] pod "kube-proxy-wtzct" is "Ready"
	I1222 01:15:14.992394 2044197 pod_ready.go:86] duration metric: took 5.413734ms for pod "kube-proxy-wtzct" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:15.061761 2044197 pod_ready.go:83] waiting for pod "kube-scheduler-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:15.460868 2044197 pod_ready.go:94] pod "kube-scheduler-pause-583340" is "Ready"
	I1222 01:15:15.460895 2044197 pod_ready.go:86] duration metric: took 399.107562ms for pod "kube-scheduler-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:15.460907 2044197 pod_ready.go:40] duration metric: took 11.016026719s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:15:15.540908 2044197 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:15:15.544271 2044197 out.go:179] * Done! kubectl is now configured to use "pause-583340" cluster and "default" namespace by default
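	
	The pod_ready polling above (process 2044197, the concurrent pause-583340 test) is roughly what kubectl's built-in readiness wait does; an equivalent manual check would be (illustrative only, assuming the pause-583340 kubeconfig context minikube creates for the profile):
	
	  kubectl --context pause-583340 -n kube-system wait pod \
	    -l component=kube-apiserver --for=condition=Ready --timeout=120s
	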
	
	
	==> CRI-O <==
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.708635244Z" level=info msg="Started container" PID=2445 containerID=a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be description=kube-system/kindnet-tw999/kindnet-cni id=28adf616-3110-424a-912d-5a8aea19e95b name=/runtime.v1.RuntimeService/StartContainer sandboxID=097e191e3b64218bf698323c3402eefb1446e5bfbcd3dcfc2d1942db6bf65cb4
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.729643176Z" level=info msg="Created container 2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e: kube-system/kube-scheduler-pause-583340/kube-scheduler" id=a349b478-ae85-4786-9d64-4e1ed9bf8039 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.736475992Z" level=info msg="Created container 4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc: kube-system/kube-controller-manager-pause-583340/kube-controller-manager" id=d40122d6-b4ef-4537-b663-0d4a5bb116f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.737308402Z" level=info msg="Starting container: 4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc" id=0389e9d3-8643-4280-bbea-53ffa0b72a8d name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.740570768Z" level=info msg="Starting container: 2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e" id=32c6d50d-4bf6-41dc-b1e2-687572127c8f name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.743207831Z" level=info msg="Started container" PID=2455 containerID=4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc description=kube-system/kube-controller-manager-pause-583340/kube-controller-manager id=0389e9d3-8643-4280-bbea-53ffa0b72a8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb611e0e63292b12038c9cd74c1a6760020dacfdcf294dee5cd4848819e89636
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.744756428Z" level=info msg="Started container" PID=2449 containerID=2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e description=kube-system/kube-scheduler-pause-583340/kube-scheduler id=32c6d50d-4bf6-41dc-b1e2-687572127c8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f557e604e139cf8c956e348cebad280a5cac487288bf2a50a38fe6f97c716451
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.750117298Z" level=info msg="Created container 0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814: kube-system/coredns-66bc5c9577-q9vtv/coredns" id=e868291f-ec3d-4f0e-8174-7796e249873d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.751654194Z" level=info msg="Starting container: 0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814" id=f64cbcc3-c3f5-4ffd-ac34-2223a44463ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.753435654Z" level=info msg="Started container" PID=2461 containerID=0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814 description=kube-system/coredns-66bc5c9577-q9vtv/coredns id=f64cbcc3-c3f5-4ffd-ac34-2223a44463ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bdb5f67b60da5f174b107a4c9c8743d38b6eafcd00913306038af8ee3ee4c23
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.12293679Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.127151192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.127323256Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.127409785Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.131352975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.131388576Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.131413667Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.135800068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.135837794Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.135860636Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.139206331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.139243212Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.139274923Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.142578937Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.142633336Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0acd4974cdd60       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     23 seconds ago       Running             coredns                   1                   4bdb5f67b60da       coredns-66bc5c9577-q9vtv               kube-system
	4e4ea80f48321       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     23 seconds ago       Running             kube-controller-manager   1                   cb611e0e63292       kube-controller-manager-pause-583340   kube-system
	2ca331b193d96       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     23 seconds ago       Running             kube-scheduler            1                   f557e604e139c       kube-scheduler-pause-583340            kube-system
	a5bf8f4c8d241       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     23 seconds ago       Running             kindnet-cni               1                   097e191e3b642       kindnet-tw999                          kube-system
	360534ca8ae0d       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     23 seconds ago       Running             kube-proxy                1                   b0b93a08eadb8       kube-proxy-wtzct                       kube-system
	53f57d7cd6467       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     23 seconds ago       Running             kube-apiserver            1                   10f5859055244       kube-apiserver-pause-583340            kube-system
	d986846af4e9c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     23 seconds ago       Running             etcd                      1                   c41b0fd246654       etcd-pause-583340                      kube-system
	5a5697ce35641       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     37 seconds ago       Exited              coredns                   0                   4bdb5f67b60da       coredns-66bc5c9577-q9vtv               kube-system
	d1433bb270cb8       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   48 seconds ago       Exited              kindnet-cni               0                   097e191e3b642       kindnet-tw999                          kube-system
	e8bde2a9b810e       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     50 seconds ago       Exited              kube-proxy                0                   b0b93a08eadb8       kube-proxy-wtzct                       kube-system
	e9c4b91f92f67       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     About a minute ago   Exited              kube-controller-manager   0                   cb611e0e63292       kube-controller-manager-pause-583340   kube-system
	75d17f9897ffa       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     About a minute ago   Exited              kube-scheduler            0                   f557e604e139c       kube-scheduler-pause-583340            kube-system
	1704cd445120e       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     About a minute ago   Exited              kube-apiserver            0                   10f5859055244       kube-apiserver-pause-583340            kube-system
	bc34fa5a980ae       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     About a minute ago   Exited              etcd                      0                   c41b0fd246654       etcd-pause-583340                      kube-system
	
	
	==> coredns [0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43462 - 37570 "HINFO IN 5849911280662542970.4270756862494641719. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034754661s
	
	
	==> coredns [5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37818 - 17360 "HINFO IN 8738458638431759642.221725157045792913. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022546748s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-583340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-583340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=pause-583340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_14_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:14:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-583340
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:15:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-583340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                83a8f616-4c79-4941-ae7f-7fd350bcc280
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q9vtv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     52s
	  kube-system                 etcd-pause-583340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         57s
	  kube-system                 kindnet-tw999                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-pause-583340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-pause-583340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-wtzct                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-pause-583340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 50s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node pause-583340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node pause-583340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node pause-583340 status is now: NodeHasSufficientPID
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s                kubelet          Node pause-583340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s                kubelet          Node pause-583340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s                kubelet          Node pause-583340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node pause-583340 event: Registered Node pause-583340 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-583340 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-583340 event: Registered Node pause-583340 in Controller
	
	
	==> dmesg <==
	[Dec22 00:51] overlayfs: idmapped layers are currently not supported
	[Dec22 00:53] overlayfs: idmapped layers are currently not supported
	[Dec22 00:54] overlayfs: idmapped layers are currently not supported
	[ +51.911977] overlayfs: idmapped layers are currently not supported
	[  +3.821564] overlayfs: idmapped layers are currently not supported
	[Dec22 00:55] overlayfs: idmapped layers are currently not supported
	[Dec22 00:56] overlayfs: idmapped layers are currently not supported
	[Dec22 00:57] overlayfs: idmapped layers are currently not supported
	[  +3.314214] overlayfs: idmapped layers are currently not supported
	[Dec22 00:58] overlayfs: idmapped layers are currently not supported
	[Dec22 00:59] overlayfs: idmapped layers are currently not supported
	[Dec22 01:00] overlayfs: idmapped layers are currently not supported
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a] <==
	{"level":"warn","ts":"2025-12-22T01:14:17.849647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.867745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.893594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.912949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.934207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.948398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:18.114607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-22T01:14:45.634525Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-22T01:14:45.634588Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-583340","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-22T01:14:45.634679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-22T01:14:45.634748Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-22T01:14:45.789079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-22T01:14:45.789146Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-22T01:14:45.789220Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-22T01:14:45.789236Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789229Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789340Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-22T01:14:45.789376Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789462Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789481Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-22T01:14:45.789489Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-22T01:14:45.792530Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-22T01:14:45.792618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-22T01:14:45.792646Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-22T01:14:45.792666Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-583340","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [d986846af4e9c4f8639a53c9046584c7dabd234f69aad5a9cf63520365ad4dca] <==
	{"level":"warn","ts":"2025-12-22T01:14:59.318016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.419829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.454331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.471207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.513976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.579925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.591897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.638648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.671602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.701840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.725582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.759577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.792266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.829290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.863993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.913629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.935252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.983042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.995715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.023518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.073639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.158207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.188640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.273257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.619324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:15:19 up 1 day,  7:57,  0 user,  load average: 3.24, 2.16, 1.91
	Linux pause-583340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be] <==
	I1222 01:14:55.835830       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:14:55.837652       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1222 01:14:55.837894       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:14:55.840172       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:14:55.840259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:14:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:14:56.122611       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:14:56.122710       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:14:56.122760       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:14:56.125515       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:15:02.523134       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:15:02.523215       1 metrics.go:72] Registering metrics
	I1222 01:15:02.523302       1 controller.go:711] "Syncing nftables rules"
	I1222 01:15:06.122512       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1222 01:15:06.122574       1 main.go:301] handling current node
	I1222 01:15:16.122711       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1222 01:15:16.122744       1 main.go:301] handling current node
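
	Note: the "nri plugin exited" line above only means /var/run/nri/nri.sock is absent, i.e. the container runtime has NRI disabled; kindnet logs it at Info level and keeps going (its caches still sync a few lines later). A trivial sketch of the same socket check (path taken from the log):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		const sock = "/var/run/nri/nri.sock"
		// Stat the NRI socket: absence is the benign case kindnet reported.
		if _, err := os.Stat(sock); errors.Is(err, fs.ErrNotExist) {
			fmt.Println(sock, "missing: NRI is disabled in the runtime")
			return
		} else if err != nil {
			fmt.Println("stat failed:", err)
			return
		}
		fmt.Println(sock, "exists: NRI is enabled")
	}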
	
	
	==> kindnet [d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0] <==
	I1222 01:14:30.840641       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:14:30.921567       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1222 01:14:30.921766       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:14:30.921807       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:14:30.921843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:14:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:14:31.039532       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:14:31.045911       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:14:31.046250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:14:31.046415       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:14:31.158116       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:14:31.221367       1 metrics.go:72] Registering metrics
	I1222 01:14:31.222242       1 controller.go:711] "Syncing nftables rules"
	I1222 01:14:41.044827       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1222 01:14:41.046112       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2] <==
	W1222 01:14:45.650335       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650382       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650456       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650538       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650583       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650661       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650702       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650765       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650813       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650855       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652589       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652716       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652802       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652931       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653012       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653061       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653103       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653185       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653250       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653304       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653394       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653625       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653695       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653755       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [53f57d7cd64676b980d9c9d9c9843515e2f0726aa6640a5bdb474959c5074223] <==
	I1222 01:15:02.427025       1 policy_source.go:240] refreshing policies
	I1222 01:15:02.435516       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:15:02.461085       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1222 01:15:02.461407       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:15:02.461565       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1222 01:15:02.462418       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1222 01:15:02.462470       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1222 01:15:02.465146       1 aggregator.go:171] initial CRD sync complete...
	I1222 01:15:02.465200       1 autoregister_controller.go:144] Starting autoregister controller
	I1222 01:15:02.465240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:15:02.465268       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:15:02.477190       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1222 01:15:02.481494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1222 01:15:02.481814       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1222 01:15:02.481879       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1222 01:15:02.485928       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:15:02.494224       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1222 01:15:02.494294       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1222 01:15:02.507046       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1222 01:15:03.065043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:15:04.351355       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:15:05.774220       1 controller.go:667] quota admission added evaluator for: endpoints
	I1222 01:15:05.974008       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:15:06.023746       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:15:06.127082       1 controller.go:667] quota admission added evaluator for: deployments.apps
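
	Note: this second apiserver instance comes up cleanly — caches sync at 01:15:02 and quota evaluators are re-registered as workloads reappear. A quick, hypothetical readiness check against it (the 192.168.76.2:8443 address appears in the kubelet log below; InsecureSkipVerify is an assumption for a one-off manual probe only, since /readyz is anonymously readable by default):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// Skip cert verification for a manual check; a real client
			// would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/readyz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect 200 "ok" once ready
	}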
	
	
	==> kube-controller-manager [4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc] <==
	I1222 01:15:05.722220       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1222 01:15:05.722310       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1222 01:15:05.722380       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-583340"
	I1222 01:15:05.722428       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1222 01:15:05.725024       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1222 01:15:05.726195       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:15:05.727592       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:15:05.729803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1222 01:15:05.730999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1222 01:15:05.733228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1222 01:15:05.734419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1222 01:15:05.754781       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:15:05.757958       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:15:05.765447       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:15:05.765538       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 01:15:05.765622       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1222 01:15:05.765456       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:15:05.765962       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1222 01:15:05.766045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 01:15:05.766177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:15:05.767380       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:15:05.769495       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1222 01:15:05.771014       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1222 01:15:05.785923       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1222 01:15:05.787125       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
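
	Note: "Caches are synced" here is the standard client-go shared-informer milestone — each controller lists and watches its resource types and only starts reconciling once the initial List is cached. A sketch of the same idiom (not the controller-manager's actual code; the kubeconfig path is an assumption):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		stop := make(chan struct{})
		defer close(stop)

		// Shared informer factory with a 30s resync period.
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		podInformer := factory.Core().V1().Pods().Informer()

		factory.Start(stop) // kicks off the initial List + Watch
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("timed out waiting for caches to sync")
		}
		fmt.Println("caches are synced") // the same milestone the controllers log
	}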
	
	
	==> kube-controller-manager [e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea] <==
	I1222 01:14:26.292969       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1222 01:14:26.298629       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1222 01:14:26.302847       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:14:26.304773       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-583340" podCIDRs=["10.244.0.0/24"]
	I1222 01:14:26.308385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 01:14:26.308500       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1222 01:14:26.310859       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1222 01:14:26.311112       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:14:26.311522       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:14:26.313173       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1222 01:14:26.313240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1222 01:14:26.313253       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:14:26.315191       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1222 01:14:26.315273       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1222 01:14:26.320500       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:14:26.320636       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 01:14:26.321900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:14:26.321975       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 01:14:26.322005       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 01:14:26.327935       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:14:26.330174       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:14:26.341137       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:14:26.359368       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1222 01:14:26.363277       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1222 01:14:26.379134       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [360534ca8ae0ddd9193a87336eb1e700824be47538456a2172361591d20fdd15] <==
	I1222 01:14:55.803288       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:14:56.794726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:15:02.496369       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:15:02.499936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1222 01:15:02.500919       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:15:02.612372       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:15:02.612434       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:15:02.617961       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:15:02.618264       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:15:02.618278       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:15:02.619774       1 config.go:200] "Starting service config controller"
	I1222 01:15:02.619784       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:15:02.619800       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:15:02.619804       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:15:02.619815       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:15:02.619819       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:15:02.623930       1 config.go:309] "Starting node config controller"
	I1222 01:15:02.624010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:15:02.624019       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:15:02.720250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1222 01:15:02.729238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:15:02.729276       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242] <==
	I1222 01:14:28.172045       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:14:28.265487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:14:28.366267       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:14:28.366336       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1222 01:14:28.366422       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:14:28.434008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:14:28.434218       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:14:28.444496       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:14:28.445317       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:14:28.445386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:14:28.484946       1 config.go:200] "Starting service config controller"
	I1222 01:14:28.484964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:14:28.484983       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:14:28.484987       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:14:28.484998       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:14:28.485001       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:14:28.485592       1 config.go:309] "Starting node config controller"
	I1222 01:14:28.485601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:14:28.485608       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:14:28.586176       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:14:28.586259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1222 01:14:28.586461       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e] <==
	I1222 01:15:01.159388       1 serving.go:386] Generated self-signed cert in-memory
	I1222 01:15:03.514366       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1222 01:15:03.514483       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:15:03.520973       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1222 01:15:03.521019       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1222 01:15:03.521049       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:15:03.521057       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:15:03.521074       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1222 01:15:03.521086       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1222 01:15:03.521267       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:15:03.521410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1222 01:15:03.622607       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1222 01:15:03.622642       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1222 01:15:03.622659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9] <==
	E1222 01:14:19.424258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1222 01:14:19.424373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1222 01:14:19.424490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1222 01:14:19.424599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:14:19.424748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1222 01:14:19.424864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:14:19.425035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 01:14:19.425152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 01:14:20.261552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 01:14:20.325551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1222 01:14:20.355679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:14:20.385891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 01:14:20.394577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1222 01:14:20.463707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1222 01:14:20.481875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 01:14:20.527099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1222 01:14:20.527244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:14:20.585540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1222 01:14:23.460580       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:14:45.629905       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1222 01:14:45.629936       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1222 01:14:45.629954       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1222 01:14:45.629982       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:14:45.630149       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1222 01:14:45.630164       1 run.go:72] "command failed" err="finished without leader elect"
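
	Note: the old scheduler exits its graceful-termination path with "finished without leader elect" once the secure server stops. For context, a sketch of the client-go leader-election idiom control-plane components use (not the scheduler's actual code; the lease name, namespace, and identity are made up for illustration):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock; "sketch-lock" and "sketch-holder" are invented names.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "sketch-lock", cs.CoreV1(), cs.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "sketch-holder"})
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		// Blocks while leading; returns when leadership is lost or ctx ends.
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("leading") },
				OnStoppedLeading: func() { fmt.Println("stopped leading") },
			},
		})
	}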
	
	
	==> kubelet <==
	Dec 22 01:14:55 pause-583340 kubelet[1336]: I1222 01:14:55.579709    1336 scope.go:117] "RemoveContainer" containerID="e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.580282    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tw999\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f4faa7e-1c28-47ad-997f-e0f221fb7da9" pod="kube-system/kindnet-tw999"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.580610    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtzct\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7b1b5156-bdf0-4050-bbde-959b400ff9c7" pod="kube-system/kube-proxy-wtzct"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581000    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q9vtv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0f72789-c82a-4c15-9a16-b5fe1719ee81" pod="kube-system/coredns-66bc5c9577-q9vtv"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581237    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c678e99cd3083231a93cbd7ce134239" pod="kube-system/kube-scheduler-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581473    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7e2543012fea4ce03318a1c0f6aaee0f" pod="kube-system/etcd-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581701    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="14d35a07742096b90a0b5fc6b9c36b99" pod="kube-system/kube-apiserver-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581910    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="34d854e664c3ec1174aaf6b825d460d3" pod="kube-system/kube-controller-manager-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: I1222 01:14:55.584161    1336 scope.go:117] "RemoveContainer" containerID="75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.584576    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7e2543012fea4ce03318a1c0f6aaee0f" pod="kube-system/etcd-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.584752    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="14d35a07742096b90a0b5fc6b9c36b99" pod="kube-system/kube-apiserver-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.584912    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="34d854e664c3ec1174aaf6b825d460d3" pod="kube-system/kube-controller-manager-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.585065    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tw999\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f4faa7e-1c28-47ad-997f-e0f221fb7da9" pod="kube-system/kindnet-tw999"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.588321    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtzct\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7b1b5156-bdf0-4050-bbde-959b400ff9c7" pod="kube-system/kube-proxy-wtzct"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.588582    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q9vtv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0f72789-c82a-4c15-9a16-b5fe1719ee81" pod="kube-system/coredns-66bc5c9577-q9vtv"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.588809    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c678e99cd3083231a93cbd7ce134239" pod="kube-system/kube-scheduler-pause-583340"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.271972    1336 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-583340\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.272206    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-583340\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="34d854e664c3ec1174aaf6b825d460d3" pod="kube-system/kube-controller-manager-pause-583340"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.323713    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-tw999\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="1f4faa7e-1c28-47ad-997f-e0f221fb7da9" pod="kube-system/kindnet-tw999"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.354372    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-wtzct\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="7b1b5156-bdf0-4050-bbde-959b400ff9c7" pod="kube-system/kube-proxy-wtzct"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.405304    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-q9vtv\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="c0f72789-c82a-4c15-9a16-b5fe1719ee81" pod="kube-system/coredns-66bc5c9577-q9vtv"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: W1222 01:15:02.569355    1336 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 22 01:15:16 pause-583340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:15:16 pause-583340 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:15:16 pause-583340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
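Reading the kubelet journal above: pod status updates first fail with "connection refused" against https://192.168.76.2:8443 (the API server is not accepting connections), then, once the server answers again, they fail with node-authorizer denials ("no relationship found between node 'pause-583340' and this object") until the Node object is re-registered, and finally systemd stops kubelet.service cleanly. A hand-run probe for the first condition, assuming the pause-583340 container is still up and curl exists in the node image (an illustrative repro, not harness output):

	# Ask the apiserver for /healthz from inside the node; "connection refused"
	# here corresponds to the status_manager errors in the journal above.
	out/minikube-linux-arm64 -p pause-583340 ssh -- curl -sk https://192.168.76.2:8443/healthz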
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-583340 -n pause-583340
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-583340 -n pause-583340: exit status 2 (487.875654ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
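The harness treats exit status 2 from `status` as possibly benign because minikube signals non-running components through the exit code while still printing the per-component fields; here the {{.APIServer}} template prints Running even though the `pause` attempt never recorded an end time in the audit table below. The probe can be repeated by hand against this profile (a sketch; --output json is the standard flag for the full component breakdown):

	# Same single-field query the harness runs:
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-583340
	# Full status of host, kubelet and apiserver in machine-readable form:
	out/minikube-linux-arm64 status --output json -p pause-583340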
helpers_test.go:270: (dbg) Run:  kubectl --context pause-583340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-583340
helpers_test.go:244: (dbg) docker inspect pause-583340:

-- stdout --
	[
	    {
	        "Id": "4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c",
	        "Created": "2025-12-22T01:13:46.479251362Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2039493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:13:49.217860819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/hostname",
	        "HostsPath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/hosts",
	        "LogPath": "/var/lib/docker/containers/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c/4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c-json.log",
	        "Name": "/pause-583340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-583340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-583340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4374be10b56d7e7bc81a26ca3494b59126a9495ee1586a034076c21f6768977c",
	                "LowerDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a74ce76b035d0d66089b6abb479f9ea7df1f8425509ec5288babd3897e6bfc92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-583340",
	                "Source": "/var/lib/docker/volumes/pause-583340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-583340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-583340",
	                "name.minikube.sigs.k8s.io": "pause-583340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac3ebb6fcf5970862acf821fc7fcc1b0bee98f13b17797f58852a0a6460e910a",
	            "SandboxKey": "/var/run/docker/netns/ac3ebb6fcf59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38656"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38657"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38660"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38658"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38659"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-583340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:3b:da:b5:af:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1f7dfb485d3424cdc12779a2f73c16ee78d9ce44cbbe257bb9cc718d35f48194",
	                    "EndpointID": "9823ea3de2b1346481ab23ff825c8a98c5838e6937aab36574bec7d7eebb2631",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-583340",
	                        "4374be10b56d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
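For reference, the parts of the inspect dump most relevant here are the container state ("running") and the published port map (the apiserver's 8443/tcp maps to 127.0.0.1:38659 on the host). Both can be queried directly with Go templates; the templates below mirror the ones minikube itself runs later in these logs (hand-run equivalents, not harness output):

	# Container state only:
	docker container inspect pause-583340 --format={{.State.Status}}
	# Host port published for the node's SSH port 22:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-583340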
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-583340 -n pause-583340
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-583340 -n pause-583340: exit status 2 (568.015005ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-583340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-583340 logs -n 25: (2.18341894s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-171197                                                                                         │ multinode-171197            │ jenkins │ v1.37.0 │ 22 Dec 25 01:10 UTC │                     │
	│ start   │ -p multinode-171197-m02 --driver=docker  --container-runtime=crio                                                │ multinode-171197-m02        │ jenkins │ v1.37.0 │ 22 Dec 25 01:10 UTC │                     │
	│ start   │ -p multinode-171197-m03 --driver=docker  --container-runtime=crio                                                │ multinode-171197-m03        │ jenkins │ v1.37.0 │ 22 Dec 25 01:10 UTC │ 22 Dec 25 01:11 UTC │
	│ node    │ add -p multinode-171197                                                                                          │ multinode-171197            │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │                     │
	│ delete  │ -p multinode-171197-m03                                                                                          │ multinode-171197-m03        │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │ 22 Dec 25 01:11 UTC │
	│ delete  │ -p multinode-171197                                                                                              │ multinode-171197            │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │ 22 Dec 25 01:11 UTC │
	│ start   │ -p scheduled-stop-389645 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:11 UTC │ 22 Dec 25 01:12 UTC │
	│ stop    │ -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --cancel-scheduled                                                                      │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │ 22 Dec 25 01:12 UTC │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │                     │
	│ stop    │ -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:12 UTC │ 22 Dec 25 01:12 UTC │
	│ delete  │ -p scheduled-stop-389645                                                                                         │ scheduled-stop-389645       │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:13 UTC │
	│ start   │ -p insufficient-storage-993583 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-993583 │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │                     │
	│ delete  │ -p insufficient-storage-993583                                                                                   │ insufficient-storage-993583 │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:13 UTC │
	│ start   │ -p pause-583340 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-583340                │ jenkins │ v1.37.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:14 UTC │
	│ start   │ -p missing-upgrade-158754 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-158754      │ jenkins │ v1.35.0 │ 22 Dec 25 01:13 UTC │ 22 Dec 25 01:14 UTC │
	│ start   │ -p missing-upgrade-158754 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-158754      │ jenkins │ v1.37.0 │ 22 Dec 25 01:14 UTC │                     │
	│ start   │ -p pause-583340 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-583340                │ jenkins │ v1.37.0 │ 22 Dec 25 01:14 UTC │ 22 Dec 25 01:15 UTC │
	│ pause   │ -p pause-583340 --alsologtostderr -v=5                                                                           │ pause-583340                │ jenkins │ v1.37.0 │ 22 Dec 25 01:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:14:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:14:44.331609 2044197 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:14:44.331737 2044197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:14:44.331754 2044197 out.go:374] Setting ErrFile to fd 2...
	I1222 01:14:44.331766 2044197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:14:44.332137 2044197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:14:44.332579 2044197 out.go:368] Setting JSON to false
	I1222 01:14:44.333593 2044197 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115034,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:14:44.333690 2044197 start.go:143] virtualization:  
	I1222 01:14:44.336645 2044197 out.go:179] * [pause-583340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:14:44.338715 2044197 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:14:44.339561 2044197 notify.go:221] Checking for updates...
	I1222 01:14:44.344655 2044197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:14:44.347620 2044197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:14:44.350499 2044197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:14:44.353460 2044197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:14:44.356289 2044197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:14:44.359706 2044197 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:14:44.360885 2044197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:14:44.384979 2044197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:14:44.385112 2044197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:14:44.463922 2044197 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:14:44.454216519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:14:44.464040 2044197 docker.go:319] overlay module found
	I1222 01:14:44.467287 2044197 out.go:179] * Using the docker driver based on existing profile
	I1222 01:14:44.470122 2044197 start.go:309] selected driver: docker
	I1222 01:14:44.470143 2044197 start.go:928] validating driver "docker" against &{Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:14:44.470297 2044197 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:14:44.470397 2044197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:14:44.531371 2044197 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:14:44.52243327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:14:44.531775 2044197 cni.go:84] Creating CNI manager for ""
	I1222 01:14:44.531838 2044197 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:14:44.531892 2044197 start.go:353] cluster config:
	{Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:14:44.535156 2044197 out.go:179] * Starting "pause-583340" primary control-plane node in "pause-583340" cluster
	I1222 01:14:44.538441 2044197 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:14:44.541312 2044197 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:14:44.544176 2044197 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:14:44.544226 2044197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:14:44.544256 2044197 cache.go:65] Caching tarball of preloaded images
	I1222 01:14:44.544262 2044197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:14:44.544345 2044197 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:14:44.544355 2044197 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:14:44.544478 2044197 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/config.json ...
	I1222 01:14:44.564784 2044197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:14:44.564808 2044197 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:14:44.564823 2044197 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:14:44.564852 2044197 start.go:360] acquireMachinesLock for pause-583340: {Name:mk537b6ea77b948b0f7bd76cad679f7f864f9ec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:14:44.564911 2044197 start.go:364] duration metric: took 35.912µs to acquireMachinesLock for "pause-583340"
	I1222 01:14:44.564935 2044197 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:14:44.564945 2044197 fix.go:54] fixHost starting: 
	I1222 01:14:44.565201 2044197 cli_runner.go:164] Run: docker container inspect pause-583340 --format={{.State.Status}}
	I1222 01:14:44.581907 2044197 fix.go:112] recreateIfNeeded on pause-583340: state=Running err=<nil>
	W1222 01:14:44.581950 2044197 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:14:44.202800 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:44.224580 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:44.224643 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:44.224652 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:45.739995 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:45.765162 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:45.765230 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:45.765242 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:44.585101 2044197 out.go:252] * Updating the running docker "pause-583340" container ...
	I1222 01:14:44.585137 2044197 machine.go:94] provisionDockerMachine start ...
	I1222 01:14:44.585220 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:44.602379 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:44.602710 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:44.602726 2044197 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:14:44.745976 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-583340
	
	I1222 01:14:44.746003 2044197 ubuntu.go:182] provisioning hostname "pause-583340"
	I1222 01:14:44.746077 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:44.763793 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:44.764098 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:44.764109 2044197 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-583340 && echo "pause-583340" | sudo tee /etc/hostname
	I1222 01:14:44.916630 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-583340
	
	I1222 01:14:44.916714 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:44.934540 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:44.934853 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:44.934877 2044197 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-583340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-583340/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-583340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:14:45.103878 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:14:45.103907 2044197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:14:45.103949 2044197 ubuntu.go:190] setting up certificates
	I1222 01:14:45.103971 2044197 provision.go:84] configureAuth start
	I1222 01:14:45.104159 2044197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-583340
	I1222 01:14:45.125837 2044197 provision.go:143] copyHostCerts
	I1222 01:14:45.125921 2044197 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:14:45.125948 2044197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:14:45.126036 2044197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:14:45.126174 2044197 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:14:45.126187 2044197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:14:45.126220 2044197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:14:45.126300 2044197 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:14:45.126311 2044197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:14:45.126339 2044197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:14:45.126408 2044197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.pause-583340 san=[127.0.0.1 192.168.76.2 localhost minikube pause-583340]
	I1222 01:14:45.270053 2044197 provision.go:177] copyRemoteCerts
	I1222 01:14:45.270220 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:14:45.270388 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:45.288361 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:45.395036 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:14:45.416230 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1222 01:14:45.435176 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:14:45.455330 2044197 provision.go:87] duration metric: took 351.345264ms to configureAuth
	I1222 01:14:45.455370 2044197 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:14:45.455596 2044197 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:14:45.455712 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:45.473412 2044197 main.go:144] libmachine: Using SSH client type: native
	I1222 01:14:45.473807 2044197 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38656 <nil> <nil>}
	I1222 01:14:45.473829 2044197 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:14:51.020750 2044197 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:14:51.020780 2044197 machine.go:97] duration metric: took 6.435632838s to provisionDockerMachine
	I1222 01:14:51.020792 2044197 start.go:293] postStartSetup for "pause-583340" (driver="docker")
	I1222 01:14:51.020803 2044197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:14:51.020867 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:14:51.020925 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.042068 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.137835 2044197 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:14:51.141516 2044197 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:14:51.141544 2044197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:14:51.141555 2044197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:14:51.141614 2044197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:14:51.141695 2044197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:14:51.142445 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:14:51.150356 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:14:51.168412 2044197 start.go:296] duration metric: took 147.604392ms for postStartSetup
	I1222 01:14:51.168511 2044197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:14:51.168561 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.185653 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.279201 2044197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:14:51.284006 2044197 fix.go:56] duration metric: took 6.719054259s for fixHost
	I1222 01:14:51.284030 2044197 start.go:83] releasing machines lock for "pause-583340", held for 6.719106523s
	I1222 01:14:51.284110 2044197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-583340
	I1222 01:14:51.302487 2044197 ssh_runner.go:195] Run: cat /version.json
	I1222 01:14:51.302503 2044197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:14:51.302548 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.302568 2044197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-583340
	I1222 01:14:51.322472 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.329453 2044197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38656 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/pause-583340/id_rsa Username:docker}
	I1222 01:14:51.545155 2044197 ssh_runner.go:195] Run: systemctl --version
	I1222 01:14:51.551335 2044197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:14:51.644905 2044197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:14:51.649169 2044197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:14:51.649256 2044197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:14:51.659511 2044197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:14:51.659535 2044197 start.go:496] detecting cgroup driver to use...
	I1222 01:14:51.659584 2044197 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:14:51.659656 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:14:51.674804 2044197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:14:51.688005 2044197 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:14:51.688100 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:14:51.704038 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:14:51.717392 2044197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:14:51.857869 2044197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:14:51.992608 2044197 docker.go:234] disabling docker service ...
	I1222 01:14:51.992713 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:14:52.009176 2044197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:14:52.023622 2044197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:14:52.155267 2044197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:14:52.295268 2044197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:14:52.309117 2044197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:14:52.323765 2044197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:14:52.323876 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.333062 2044197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:14:52.333154 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.343018 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.351788 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.360401 2044197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:14:52.368905 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.377743 2044197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.386237 2044197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:14:52.395237 2044197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:14:52.403890 2044197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:14:52.410975 2044197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:14:52.551499 2044197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:14:52.762691 2044197 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:14:52.762761 2044197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:14:52.766649 2044197 start.go:564] Will wait 60s for crictl version
	I1222 01:14:52.766715 2044197 ssh_runner.go:195] Run: which crictl
	I1222 01:14:52.770409 2044197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:14:52.800742 2044197 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:14:52.800833 2044197 ssh_runner.go:195] Run: crio --version
	I1222 01:14:52.830111 2044197 ssh_runner.go:195] Run: crio --version
	I1222 01:14:52.869084 2044197 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:14:52.872094 2044197 cli_runner.go:164] Run: docker network inspect pause-583340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:14:52.888809 2044197 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:14:52.892995 2044197 kubeadm.go:884] updating cluster {Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:14:52.893128 2044197 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:14:52.893192 2044197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:14:52.926955 2044197 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:14:52.926980 2044197 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:14:52.927035 2044197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:14:52.955434 2044197 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:14:52.955456 2044197 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:14:52.955470 2044197 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1222 01:14:52.955568 2044197 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-583340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
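	The drop-in above relies on a systemd idiom: a simple service may only have one ExecStart, so the empty ExecStart= line first clears the base unit's command before the override installs the minikube-managed kubelet invocation. A small sketch that renders such a drop-in, with the hostname and node IP taken from the log; writing into /etc/systemd/system needs root, so this writes locally and is illustrative only:

```go
package main

import (
	"fmt"
	"os"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --hostname-override=%s --node-ip=%s
`

func main() {
	// The empty ExecStart= clears the base unit's command list so the
	// line after it becomes the only one; without it systemd rejects a
	// second ExecStart for a simple service.
	unit := fmt.Sprintf(dropIn, "pause-583340", "192.168.76.2")
	if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		panic(err)
	}
}
```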
	I1222 01:14:52.955646 2044197 ssh_runner.go:195] Run: crio config
	I1222 01:14:53.027776 2044197 cni.go:84] Creating CNI manager for ""
	I1222 01:14:53.027800 2044197 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:14:53.027820 2044197 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:14:53.027842 2044197 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-583340 NodeName:pause-583340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:14:53.027975 2044197 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-583340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
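	The generated config is four YAML documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check before handing it to kubeadm is to split on the separators and read back each document's apiVersion and kind, as in this sketch (assumes gopkg.in/yaml.v3 and a local kubeadm.yaml copy):

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}
```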
	I1222 01:14:53.028051 2044197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:14:53.036990 2044197 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:14:53.037057 2044197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:14:53.044619 2044197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1222 01:14:53.057220 2044197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:14:53.069928 2044197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1222 01:14:53.084406 2044197 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:14:53.088525 2044197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:14:53.241441 2044197 ssh_runner.go:195] Run: sudo systemctl start kubelet
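	The "scp memory" entries above stream generated unit files and the kubeadm manifest straight from memory to the node, after which daemon-reload picks them up and kubelet is started. A sketch of that pattern over SSH, with host, port, and credentials as placeholders (minikube's ssh_runner handles this internally):

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams an in-memory byte slice to a remote path; tee keeps
// the pipe simple, and sudo is needed for the /etc and /lib targets.
func writeRemote(client *ssh.Client, data []byte, path string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	unit := []byte("[Service]\n") // stand-in for the generated drop-in
	if err := writeRemote(client, unit, "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"); err != nil {
		panic(err)
	}
}
```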
	I1222 01:14:53.255295 2044197 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340 for IP: 192.168.76.2
	I1222 01:14:53.255315 2044197 certs.go:195] generating shared ca certs ...
	I1222 01:14:53.255331 2044197 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:14:53.255493 2044197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:14:53.255544 2044197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:14:53.255557 2044197 certs.go:257] generating profile certs ...
	I1222 01:14:53.255646 2044197 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.key
	I1222 01:14:53.256496 2044197 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/apiserver.key.edbfcaf6
	I1222 01:14:53.257315 2044197 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/proxy-client.key
	I1222 01:14:53.257540 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:14:53.257585 2044197 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:14:53.257598 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:14:53.257623 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:14:53.257651 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:14:53.257682 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:14:53.257735 2044197 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:14:53.258370 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:14:53.276899 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:14:53.295039 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:14:53.313782 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:14:53.331286 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 01:14:53.348294 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:14:53.364993 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:14:53.385049 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:14:53.403281 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:14:53.419705 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:14:53.436243 2044197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:14:53.452529 2044197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:14:53.464611 2044197 ssh_runner.go:195] Run: openssl version
	I1222 01:14:53.474407 2044197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.482713 2044197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:14:53.490666 2044197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.495082 2044197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.495145 2044197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:14:53.537910 2044197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:14:53.545378 2044197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.553189 2044197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:14:53.561350 2044197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.565209 2044197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.565294 2044197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:14:53.609663 2044197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:14:53.617158 2044197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.624509 2044197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:14:53.631460 2044197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.634987 2044197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.635057 2044197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:14:53.680484 2044197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
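	Each cycle above installs one CA: verify the PEM is non-empty, symlink it into the CA directory, compute its OpenSSL subject hash, and confirm the <hash>.0 link that TLS libraries look up (b5213941.0 is the minikubeCA hash here). A sketch of one such cycle, shelling out to openssl just as the test does; running against /etc/ssl/certs needs root:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the subject hash of a PEM certificate and creates the
// <hash>.0 symlink that OpenSSL-style trust stores search.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
```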
	I1222 01:14:53.688094 2044197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:14:53.692652 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:14:53.738660 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:14:53.784479 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:14:53.830307 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:14:53.873293 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:14:53.914826 2044197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
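	The -checkend 86400 calls assert that each control-plane certificate is still valid 24 hours from now. The same check in pure Go with crypto/x509 (the path is one example from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// i.e. whether `openssl x509 -checkend` for that window would fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```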
	I1222 01:14:53.955907 2044197 kubeadm.go:401] StartCluster: {Name:pause-583340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-583340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:14:53.956029 2044197 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:14:53.956122 2044197 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:14:53.982311 2044197 cri.go:96] found id: "5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb"
	I1222 01:14:53.982334 2044197 cri.go:96] found id: "d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0"
	I1222 01:14:53.982339 2044197 cri.go:96] found id: "e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242"
	I1222 01:14:53.982343 2044197 cri.go:96] found id: "e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	I1222 01:14:53.982346 2044197 cri.go:96] found id: "75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	I1222 01:14:53.982349 2044197 cri.go:96] found id: "1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2"
	I1222 01:14:53.982352 2044197 cri.go:96] found id: "bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a"
	I1222 01:14:53.982355 2044197 cri.go:96] found id: ""
	I1222 01:14:53.982421 2044197 ssh_runner.go:195] Run: sudo runc list -f json
	W1222 01:14:54.010784 2044197 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:14:53Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:14:54.010901 2044197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:14:54.019779 2044197 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:14:54.019800 2044197 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:14:54.019896 2044197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:14:54.028013 2044197 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:14:54.028670 2044197 kubeconfig.go:125] found "pause-583340" server: "https://192.168.76.2:8443"
	I1222 01:14:54.029599 2044197 kapi.go:59] client config for pause-583340: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:14:54.030123 2044197 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 01:14:54.030143 2044197 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 01:14:54.030149 2044197 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 01:14:54.030154 2044197 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 01:14:54.030162 2044197 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 01:14:54.030441 2044197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:14:54.040195 2044197 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:14:54.040268 2044197 kubeadm.go:602] duration metric: took 20.461561ms to restartPrimaryControlPlane
	I1222 01:14:54.040287 2044197 kubeadm.go:403] duration metric: took 84.390935ms to StartCluster
	I1222 01:14:54.040302 2044197 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:14:54.040384 2044197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:14:54.041439 2044197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:14:54.041666 2044197 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:14:54.042070 2044197 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:14:54.042198 2044197 config.go:182] Loaded profile config "pause-583340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:14:54.048075 2044197 out.go:179] * Verifying Kubernetes components...
	I1222 01:14:54.048075 2044197 out.go:179] * Enabled addons: 
	I1222 01:14:50.361994 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:50.378031 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:50.378094 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:50.378107 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:50.378170 2043948 retry.go:84] will retry after 4.3s: couldn't verify container is exited. %v: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
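	The missing-upgrade flow keeps retrying the shutdown check with growing delays (the 4.3s above is one of minikube's backoff steps) before concluding the container is already gone and force-removing it. The general shape, as a sketch with illustrative attempt counts and delays:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, doubling the
// wait between tries; minikube's retry.go adds jitter on top of this.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retry(3, 500*time.Millisecond, func() error {
		return errors.New("container not exited yet") // stand-in check
	})
	fmt.Println(err)
}
```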
	I1222 01:14:54.050984 2044197 addons.go:530] duration metric: took 8.916043ms for enable addons: enabled=[]
	I1222 01:14:54.051103 2044197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:14:54.183323 2044197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:14:54.197920 2044197 node_ready.go:35] waiting up to 6m0s for node "pause-583340" to be "Ready" ...
	I1222 01:14:54.648926 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:14:54.666474 2043948 cli_runner.go:211] docker container inspect missing-upgrade-158754 --format={{.State.Status}} returned with exit code 1
	I1222 01:14:54.666542 2043948 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	I1222 01:14:54.666555 2043948 oci.go:673] temporary error: container missing-upgrade-158754 status is  but expect it to be exited
	I1222 01:14:54.666588 2043948 oci.go:88] couldn't shut down missing-upgrade-158754 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-158754": docker container inspect missing-upgrade-158754 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-158754
	 
	I1222 01:14:54.666649 2043948 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-158754
	I1222 01:14:54.681392 2043948 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-158754
	W1222 01:14:54.701614 2043948 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-158754 returned with exit code 1
	I1222 01:14:54.701727 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:14:54.717302 2043948 cli_runner.go:164] Run: docker network rm missing-upgrade-158754
	I1222 01:14:54.830184 2043948 fix.go:124] Sleeping 1 second for extra luck!
	I1222 01:14:55.830459 2043948 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:14:55.834219 2043948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:14:55.834350 2043948 start.go:159] libmachine.API.Create for "missing-upgrade-158754" (driver="docker")
	I1222 01:14:55.834389 2043948 client.go:173] LocalClient.Create starting
	I1222 01:14:55.834507 2043948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:14:55.834543 2043948 main.go:144] libmachine: Decoding PEM data...
	I1222 01:14:55.834568 2043948 main.go:144] libmachine: Parsing certificate...
	I1222 01:14:55.834625 2043948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:14:55.834648 2043948 main.go:144] libmachine: Decoding PEM data...
	I1222 01:14:55.834671 2043948 main.go:144] libmachine: Parsing certificate...
	I1222 01:14:55.834940 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:14:55.854428 2043948 cli_runner.go:211] docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:14:55.854508 2043948 network_create.go:284] running [docker network inspect missing-upgrade-158754] to gather additional debugging logs...
	I1222 01:14:55.854525 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754
	W1222 01:14:55.881555 2043948 cli_runner.go:211] docker network inspect missing-upgrade-158754 returned with exit code 1
	I1222 01:14:55.881591 2043948 network_create.go:287] error running [docker network inspect missing-upgrade-158754]: docker network inspect missing-upgrade-158754: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-158754 not found
	I1222 01:14:55.881606 2043948 network_create.go:289] output of [docker network inspect missing-upgrade-158754]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-158754 not found
	
	** /stderr **
	I1222 01:14:55.881730 2043948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:14:55.921292 2043948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:14:55.921629 2043948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:14:55.921953 2043948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:14:55.922289 2043948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f7dfb485d34 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:c1:7c:67:4e:65} reservation:<nil>}
	I1222 01:14:55.922715 2043948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b829f0}
	I1222 01:14:55.922752 2043948 network_create.go:124] attempt to create docker network missing-upgrade-158754 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:14:55.922810 2043948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-158754 missing-upgrade-158754
	I1222 01:14:56.004672 2043948 network_create.go:108] docker network missing-upgrade-158754 192.168.85.0/24 created
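	The subnet scan above walks candidate 192.168.x.0/24 networks, advancing the third octet by 9 (49, 58, 67, 76, ...) and taking the first one no existing bridge occupies, which lands on 192.168.85.0/24 here. A sketch of that selection with the taken set hard-coded from the log:

```go
package main

import "fmt"

// freeSubnet returns the first candidate /24 not already claimed by an
// existing docker bridge network.
func freeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(freeSubnet(taken)) // 192.168.85.0/24
}
```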
	I1222 01:14:56.004722 2043948 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-158754" container
	I1222 01:14:56.004819 2043948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:14:56.046418 2043948 cli_runner.go:164] Run: docker volume create missing-upgrade-158754 --label name.minikube.sigs.k8s.io=missing-upgrade-158754 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:14:56.073583 2043948 oci.go:103] Successfully created a docker volume missing-upgrade-158754
	I1222 01:14:56.073689 2043948 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-158754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-158754 --entrypoint /usr/bin/test -v missing-upgrade-158754:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1222 01:14:56.571638 2043948 oci.go:107] Successfully prepared a docker volume missing-upgrade-158754
	I1222 01:14:56.571698 2043948 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1222 01:14:56.571708 2043948 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:14:56.571772 2043948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-158754:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:15:03.055612 2043948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-158754:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (6.483804745s)
	I1222 01:15:03.055683 2043948 kic.go:203] duration metric: took 6.483970557s to extract preloaded images to volume ...
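	The preload step mounts the lz4 image tarball read-only into a throwaway kicbase container and untars it into the named volume, so the node boots with its images already in place. A sketch of the equivalent docker invocation from Go (the image's sha256 pin from the log is dropped for brevity):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4")
	// tar runs inside the sidecar container; the volume it fills is the
	// same one the node container mounts at /var afterwards.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "missing-upgrade-158754:/extractDir",
		"gcr.io/k8s-minikube/kicbase:v0.0.46",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```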
	W1222 01:15:03.055822 2043948 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:15:03.055933 2043948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:15:03.160031 2043948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-158754 --name missing-upgrade-158754 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-158754 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-158754 --network missing-upgrade-158754 --ip 192.168.85.2 --volume missing-upgrade-158754:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1222 01:15:03.550022 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Running}}
	I1222 01:15:03.584511 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	I1222 01:15:03.612143 2043948 cli_runner.go:164] Run: docker exec missing-upgrade-158754 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:15:03.669464 2043948 oci.go:144] the created container "missing-upgrade-158754" has a running status.
	I1222 01:15:03.669492 2043948 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa...
	I1222 01:15:03.806541 2043948 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:15:03.838013 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	I1222 01:15:03.864590 2043948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:15:03.864609 2043948 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-158754 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:15:03.927032 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	I1222 01:15:03.950226 2043948 machine.go:94] provisionDockerMachine start ...
	I1222 01:15:03.950315 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:03.971555 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:03.973913 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:03.973966 2043948 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:15:03.975455 2043948 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:15:02.353045 2044197 node_ready.go:49] node "pause-583340" is "Ready"
	I1222 01:15:02.353079 2044197 node_ready.go:38] duration metric: took 8.155126141s for node "pause-583340" to be "Ready" ...
	I1222 01:15:02.353100 2044197 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:15:02.353163 2044197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:15:02.367852 2044197 api_server.go:72] duration metric: took 8.326148435s to wait for apiserver process to appear ...
	I1222 01:15:02.367878 2044197 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:15:02.367898 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:02.430748 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:02.430783 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1222 01:15:02.868191 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:02.878539 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:02.878572 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1222 01:15:03.368139 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:03.394180 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:03.394209 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1222 01:15:03.868365 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:03.897346 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:15:03.897401 2044197 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
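	The polling loop above hits /healthz roughly every half second; a 500 whose body lists [-] poststarthook entries just means some bootstrap hooks have not finished, and the loop keeps going until it gets a 200. A sketch of that loop, skipping CA verification for brevity (minikube verifies against the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz:", resp.Status) // e.g. 500 while hooks finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver")
}
```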
	I1222 01:15:07.100886 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:07.100956 2043948 ubuntu.go:182] provisioning hostname "missing-upgrade-158754"
	I1222 01:15:07.101060 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.118252 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:07.118569 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:07.118587 2043948 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-158754 && echo "missing-upgrade-158754" | sudo tee /etc/hostname
	I1222 01:15:07.253877 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:07.253978 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.273078 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:07.273444 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:07.273469 2043948 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-158754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-158754/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-158754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:15:07.401448 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:15:07.401471 2043948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:15:07.401501 2043948 ubuntu.go:190] setting up certificates
	I1222 01:15:07.401516 2043948 provision.go:84] configureAuth start
	I1222 01:15:07.401579 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:07.419722 2043948 provision.go:143] copyHostCerts
	I1222 01:15:07.419794 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:15:07.419806 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:15:07.419888 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:15:07.419995 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:15:07.420006 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:15:07.420071 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:15:07.420161 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:15:07.420173 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:15:07.420204 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:15:07.420273 2043948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-158754 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-158754]
	I1222 01:15:07.579332 2043948 provision.go:177] copyRemoteCerts
	I1222 01:15:07.579404 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:15:07.579449 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.602205 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:07.694876 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:15:07.723295 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 01:15:07.751685 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:15:07.778844 2043948 provision.go:87] duration metric: took 377.311065ms to configureAuth
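Editor's note: configureAuth regenerates the machine's server certificate with the SAN set logged above and copies ca.pem/server.pem/server-key.pem into /etc/docker on the node. A quick sanity check of the SANs, as a sketch against the paths from this log:

    # Print the SAN extension of the freshly generated server cert; it
    # should list 127.0.0.1, 192.168.85.2, localhost, minikube and
    # missing-upgrade-158754.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'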
	I1222 01:15:07.778871 2043948 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:15:07.779059 2043948 config.go:182] Loaded profile config "missing-upgrade-158754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1222 01:15:07.779180 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:07.798484 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:07.798792 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:07.798812 2043948 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:15:08.100938 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:15:08.100968 2043948 machine.go:97] duration metric: took 4.15071834s to provisionDockerMachine
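Editor's note: the SSH command just before this writes /etc/sysconfig/crio.minikube so that CRI-O treats the 10.96.0.0/12 service CIDR as an insecure registry range, then restarts the service. A sketch for confirming the option took effect on the node (assuming, as in the kicbase image, that the crio unit sources this environment file):

    # Show the drop-in minikube wrote and where the unit picks it up.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i EnvironmentFile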
	I1222 01:15:08.100980 2043948 client.go:176] duration metric: took 12.266582218s to LocalClient.Create
	I1222 01:15:08.100997 2043948 start.go:167] duration metric: took 12.266648169s to libmachine.API.Create "missing-upgrade-158754"
	I1222 01:15:08.101005 2043948 start.go:293] postStartSetup for "missing-upgrade-158754" (driver="docker")
	I1222 01:15:08.101018 2043948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:15:08.101100 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:15:08.101153 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.121911 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:08.218957 2043948 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:15:08.222118 2043948 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:15:08.222153 2043948 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1222 01:15:08.222164 2043948 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1222 01:15:08.222171 2043948 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1222 01:15:08.222181 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:15:08.222242 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:15:08.222332 2043948 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:15:08.222440 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:15:08.231432 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:15:08.256430 2043948 start.go:296] duration metric: took 155.40784ms for postStartSetup
	I1222 01:15:08.256815 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:08.274420 2043948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/config.json ...
	I1222 01:15:08.274699 2043948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:15:08.274760 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.292121 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:08.378805 2043948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:15:08.383562 2043948 start.go:128] duration metric: took 12.553067968s to createHost
	I1222 01:15:08.383723 2043948 cli_runner.go:164] Run: docker container inspect missing-upgrade-158754 --format={{.State.Status}}
	W1222 01:15:08.401469 2043948 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:15:08.401497 2043948 machine.go:94] provisionDockerMachine start ...
	I1222 01:15:08.401576 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.419156 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:08.419476 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:08.419491 2043948 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:15:08.548586 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:08.548612 2043948 ubuntu.go:182] provisioning hostname "missing-upgrade-158754"
	I1222 01:15:08.548688 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.570871 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:08.571245 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:08.571268 2043948 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-158754 && echo "missing-upgrade-158754" | sudo tee /etc/hostname
	I1222 01:15:08.705192 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-158754
	
	I1222 01:15:08.705276 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:08.724013 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:08.724377 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:08.724401 2043948 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-158754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-158754/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-158754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:15:08.849439 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:15:08.849467 2043948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:15:08.849484 2043948 ubuntu.go:190] setting up certificates
	I1222 01:15:08.849496 2043948 provision.go:84] configureAuth start
	I1222 01:15:08.849556 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:08.867656 2043948 provision.go:143] copyHostCerts
	I1222 01:15:08.867729 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:15:08.867744 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:15:08.867814 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:15:08.867913 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:15:08.867928 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:15:08.867958 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:15:08.868021 2043948 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:15:08.868030 2043948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:15:08.868049 2043948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:15:08.868100 2043948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-158754 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-158754]
	I1222 01:15:04.368114 2044197 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:15:04.377601 2044197 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1222 01:15:04.378971 2044197 api_server.go:141] control plane version: v1.34.3
	I1222 01:15:04.378997 2044197 api_server.go:131] duration metric: took 2.011111252s to wait for apiserver health ...
	I1222 01:15:04.379006 2044197 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:15:04.384052 2044197 system_pods.go:59] 7 kube-system pods found
	I1222 01:15:04.384089 2044197 system_pods.go:61] "coredns-66bc5c9577-q9vtv" [c0f72789-c82a-4c15-9a16-b5fe1719ee81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:15:04.384102 2044197 system_pods.go:61] "etcd-pause-583340" [999b060b-d37b-4922-a6f8-71276b8cf904] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:15:04.384109 2044197 system_pods.go:61] "kindnet-tw999" [1f4faa7e-1c28-47ad-997f-e0f221fb7da9] Running
	I1222 01:15:04.384115 2044197 system_pods.go:61] "kube-apiserver-pause-583340" [66197fa8-541e-4d4e-ba96-4c264c138c03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:15:04.384128 2044197 system_pods.go:61] "kube-controller-manager-pause-583340" [7c2ed5ea-dd6a-43de-8400-d40c0471b7eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:15:04.384135 2044197 system_pods.go:61] "kube-proxy-wtzct" [7b1b5156-bdf0-4050-bbde-959b400ff9c7] Running
	I1222 01:15:04.384141 2044197 system_pods.go:61] "kube-scheduler-pause-583340" [20f43e42-c175-4603-b1a0-6e498f3cc2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:15:04.384150 2044197 system_pods.go:74] duration metric: took 5.138633ms to wait for pod list to return data ...
	I1222 01:15:04.384161 2044197 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:15:04.394597 2044197 default_sa.go:45] found service account: "default"
	I1222 01:15:04.394626 2044197 default_sa.go:55] duration metric: took 10.457797ms for default service account to be created ...
	I1222 01:15:04.394638 2044197 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:15:04.404616 2044197 system_pods.go:86] 7 kube-system pods found
	I1222 01:15:04.404750 2044197 system_pods.go:89] "coredns-66bc5c9577-q9vtv" [c0f72789-c82a-4c15-9a16-b5fe1719ee81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:15:04.404776 2044197 system_pods.go:89] "etcd-pause-583340" [999b060b-d37b-4922-a6f8-71276b8cf904] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:15:04.404816 2044197 system_pods.go:89] "kindnet-tw999" [1f4faa7e-1c28-47ad-997f-e0f221fb7da9] Running
	I1222 01:15:04.404844 2044197 system_pods.go:89] "kube-apiserver-pause-583340" [66197fa8-541e-4d4e-ba96-4c264c138c03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:15:04.404865 2044197 system_pods.go:89] "kube-controller-manager-pause-583340" [7c2ed5ea-dd6a-43de-8400-d40c0471b7eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:15:04.404897 2044197 system_pods.go:89] "kube-proxy-wtzct" [7b1b5156-bdf0-4050-bbde-959b400ff9c7] Running
	I1222 01:15:04.404921 2044197 system_pods.go:89] "kube-scheduler-pause-583340" [20f43e42-c175-4603-b1a0-6e498f3cc2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:15:04.404941 2044197 system_pods.go:126] duration metric: took 10.296046ms to wait for k8s-apps to be running ...
	I1222 01:15:04.404975 2044197 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:15:04.405064 2044197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:15:04.431147 2044197 system_svc.go:56] duration metric: took 26.146491ms WaitForService to wait for kubelet
	I1222 01:15:04.431180 2044197 kubeadm.go:587] duration metric: took 10.389480678s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:15:04.431202 2044197 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:15:04.435554 2044197 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:15:04.435587 2044197 node_conditions.go:123] node cpu capacity is 2
	I1222 01:15:04.435601 2044197 node_conditions.go:105] duration metric: took 4.393351ms to run NodePressure ...
	I1222 01:15:04.435614 2044197 start.go:242] waiting for startup goroutines ...
	I1222 01:15:04.435622 2044197 start.go:247] waiting for cluster config update ...
	I1222 01:15:04.435630 2044197 start.go:256] writing updated cluster config ...
	I1222 01:15:04.435958 2044197 ssh_runner.go:195] Run: rm -f paused
	I1222 01:15:04.444837 2044197 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:15:04.445545 2044197 kapi.go:59] client config for pause-583340: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/pause-583340/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:15:04.448443 2044197 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q9vtv" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:15:06.454183 2044197 pod_ready.go:104] pod "coredns-66bc5c9577-q9vtv" is not "Ready", error: <nil>
	I1222 01:15:07.954673 2044197 pod_ready.go:94] pod "coredns-66bc5c9577-q9vtv" is "Ready"
	I1222 01:15:07.954702 2044197 pod_ready.go:86] duration metric: took 3.506236882s for pod "coredns-66bc5c9577-q9vtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:07.958058 2044197 pod_ready.go:83] waiting for pod "etcd-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
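Editor's note: the pod_ready loop polls each labelled kube-system pod until its Ready condition is True (or the pod is gone), which is why coredns flips from "not Ready" to "Ready" a few lines apart. The same check by hand, as a sketch assuming kubectl is pointed at the pause-583340 cluster:

    # Prints "True" once every container in the pod passes readiness.
    kubectl -n kube-system get pod coredns-66bc5c9577-q9vtv \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'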
	I1222 01:15:09.224341 2043948 provision.go:177] copyRemoteCerts
	I1222 01:15:09.224409 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:15:09.224450 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.242907 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:09.334862 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 01:15:09.361207 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:15:09.386974 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:15:09.412399 2043948 provision.go:87] duration metric: took 562.887435ms to configureAuth
	I1222 01:15:09.412427 2043948 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:15:09.412651 2043948 config.go:182] Loaded profile config "missing-upgrade-158754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1222 01:15:09.412851 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.430484 2043948 main.go:144] libmachine: Using SSH client type: native
	I1222 01:15:09.430799 2043948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38666 <nil> <nil>}
	I1222 01:15:09.430818 2043948 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:15:09.704154 2043948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:15:09.704174 2043948 machine.go:97] duration metric: took 1.302669347s to provisionDockerMachine
	I1222 01:15:09.704185 2043948 start.go:293] postStartSetup for "missing-upgrade-158754" (driver="docker")
	I1222 01:15:09.704197 2043948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:15:09.704265 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:15:09.704306 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.724769 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:09.819805 2043948 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:15:09.823305 2043948 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:15:09.823336 2043948 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1222 01:15:09.823346 2043948 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1222 01:15:09.823353 2043948 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1222 01:15:09.823363 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:15:09.823430 2043948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:15:09.823511 2043948 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:15:09.823617 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:15:09.832390 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:15:09.858896 2043948 start.go:296] duration metric: took 154.695674ms for postStartSetup
	I1222 01:15:09.858977 2043948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:15:09.859019 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.877565 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:09.967723 2043948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:15:09.972300 2043948 fix.go:56] duration metric: took 30.540201566s for fixHost
	I1222 01:15:09.972335 2043948 start.go:83] releasing machines lock for "missing-upgrade-158754", held for 30.540263046s
	I1222 01:15:09.972423 2043948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-158754
	I1222 01:15:09.989776 2043948 ssh_runner.go:195] Run: cat /version.json
	I1222 01:15:09.989839 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:09.990096 2043948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:15:09.990159 2043948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-158754
	I1222 01:15:10.010556 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	I1222 01:15:10.023368 2043948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38666 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/missing-upgrade-158754/id_rsa Username:docker}
	W1222 01:15:10.113833 2043948 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1222 01:15:10.113953 2043948 ssh_runner.go:195] Run: systemctl --version
	I1222 01:15:10.279703 2043948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:15:10.425423 2043948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 01:15:10.429899 2043948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:15:10.453272 2043948 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1222 01:15:10.453395 2043948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:15:10.495885 2043948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
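Editor's note: because the runtime is crio, minikube sidelines any pre-existing loopback/bridge/podman CNI configs by renaming them to *.mk_disabled, leaving the CNI it manages (kindnet on the docker driver) to install its own config later. To see, or undo, what was disabled, a sketch:

    # List everything minikube renamed out of the way.
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled'
    # Restoring one is just stripping the suffix again, e.g.:
    # sudo mv /etc/cni/net.d/100-crio-bridge.conf.mk_disabled \
    #         /etc/cni/net.d/100-crio-bridge.conf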
	I1222 01:15:10.495909 2043948 start.go:496] detecting cgroup driver to use...
	I1222 01:15:10.495946 2043948 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:15:10.496015 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:15:10.517936 2043948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:15:10.530313 2043948 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:15:10.530433 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:15:10.544197 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:15:10.560299 2043948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:15:10.652121 2043948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:15:10.770046 2043948 docker.go:234] disabling docker service ...
	I1222 01:15:10.770113 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:15:10.793298 2043948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:15:10.805307 2043948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:15:10.895648 2043948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:15:11.015548 2043948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:15:11.028545 2043948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:15:11.046259 2043948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1222 01:15:11.046324 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.056735 2043948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:15:11.056808 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.067340 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.079491 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.090978 2043948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:15:11.101416 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.112169 2043948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:15:11.129991 2043948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
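Editor's note: the sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, put conmon in the pod cgroup, and inject a default_sysctls entry that opens unprivileged ports from 0. A quick review of the result before the restart, as a sketch:

    # Expect pause_image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
    # and net.ipv4.ip_unprivileged_port_start=0 in the rewritten drop-in.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf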
	I1222 01:15:11.141202 2043948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:15:11.151680 2043948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:15:11.160617 2043948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:15:11.259025 2043948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:15:11.377661 2043948 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:15:11.377787 2043948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:15:11.381568 2043948 start.go:564] Will wait 60s for crictl version
	I1222 01:15:11.381651 2043948 ssh_runner.go:195] Run: which crictl
	I1222 01:15:11.385203 2043948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1222 01:15:11.423702 2043948 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1222 01:15:11.423848 2043948 ssh_runner.go:195] Run: crio --version
	I1222 01:15:11.474234 2043948 ssh_runner.go:195] Run: crio --version
	I1222 01:15:11.520785 2043948 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1222 01:15:11.523641 2043948 cli_runner.go:164] Run: docker network inspect missing-upgrade-158754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:15:11.539601 2043948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:15:11.543342 2043948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:15:11.554113 2043948 kubeadm.go:884] updating cluster {Name:missing-upgrade-158754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-158754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:15:11.554239 2043948 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1222 01:15:11.554292 2043948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:15:11.633032 2043948 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:15:11.633052 2043948 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:15:11.633105 2043948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:15:11.681442 2043948 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:15:11.681463 2043948 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:15:11.681471 2043948 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1222 01:15:11.681560 2043948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-158754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-158754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:15:11.681645 2043948 ssh_runner.go:195] Run: crio config
	I1222 01:15:11.730993 2043948 cni.go:84] Creating CNI manager for ""
	I1222 01:15:11.731018 2043948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:15:11.731036 2043948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:15:11.731081 2043948 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-158754 NodeName:missing-upgrade-158754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:15:11.731219 2043948 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-158754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
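Editor's note: the dump above is one multi-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch, such a file can be sanity-checked before it is ever applied, assuming the kubeadm config validate subcommand is available in the v1.32 binary found on the node:

    # Validate the generated config against the v1beta4 and component schemas.
    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new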
	
	I1222 01:15:11.731300 2043948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1222 01:15:11.740075 2043948 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:15:11.740192 2043948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:15:11.748844 2043948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:15:11.767261 2043948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:15:11.785127 2043948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:15:11.802897 2043948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:15:11.806331 2043948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:15:11.817130 2043948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:15:11.903520 2043948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:15:11.920811 2043948 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754 for IP: 192.168.85.2
	I1222 01:15:11.920831 2043948 certs.go:195] generating shared ca certs ...
	I1222 01:15:11.920845 2043948 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:15:11.920977 2043948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:15:11.921025 2043948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:15:11.921032 2043948 certs.go:257] generating profile certs ...
	I1222 01:15:11.921118 2043948 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.key
	I1222 01:15:11.921179 2043948 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/apiserver.key.0b3f80f1
	I1222 01:15:11.921215 2043948 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/proxy-client.key
	I1222 01:15:11.921356 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:15:11.921392 2043948 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:15:11.921400 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:15:11.921435 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:15:11.921462 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:15:11.921486 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:15:11.921563 2043948 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:15:11.922112 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:15:11.981230 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:15:12.051413 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:15:12.085628 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:15:12.115168 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1222 01:15:12.139772 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:15:12.164268 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:15:12.187811 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:15:12.210971 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:15:12.235132 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:15:12.262996 2043948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:15:12.286928 2043948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:15:12.305090 2043948 ssh_runner.go:195] Run: openssl version
	I1222 01:15:12.310386 2043948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.318484 2043948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:15:12.326878 2043948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.330109 2043948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.330179 2043948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:15:12.336999 2043948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:15:12.345419 2043948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:15:12.353620 2043948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.361897 2043948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:15:12.370444 2043948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.373901 2043948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.373975 2043948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:15:12.380793 2043948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:15:12.389494 2043948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:15:12.398122 2043948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.406603 2043948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:15:12.415290 2043948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.419018 2043948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.419105 2043948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:15:12.426084 2043948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:15:12.435053 2043948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
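Editor's note: the -hash/ln pairs above implement OpenSSL's hashed CA directory convention: each trusted certificate in /etc/ssl/certs gets a symlink named <subject-hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 here) so TLS clients can locate it by subject hash. Done by hand for one cert, as a sketch:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"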
	I1222 01:15:12.443291 2043948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:15:12.447030 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:15:12.453845 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:15:12.461806 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:15:12.468932 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:15:12.475527 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:15:12.483225 2043948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
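Editor's note: each -checkend 86400 run asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit here would make minikube regenerate the cert instead of reusing it. In script form, as a sketch against one of the paths above:

    # Exit 0 = valid for at least another day; exit 1 = expiring/expired.
    if sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo 'cert good for >=24h'
    else
      echo 'cert expires within 24h; would be regenerated'
    fi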
	I1222 01:15:12.490317 2043948 kubeadm.go:401] StartCluster: {Name:missing-upgrade-158754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-158754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:15:12.490398 2043948 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:15:12.490481 2043948 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:15:12.527990 2043948 cri.go:96] found id: ""
	I1222 01:15:12.528120 2043948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:15:12.537045 2043948 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:15:12.537125 2043948 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:15:12.537188 2043948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:15:12.546569 2043948 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:15:12.547219 2043948 kubeconfig.go:125] found "missing-upgrade-158754" server: "https://192.168.85.2:8443"
	I1222 01:15:12.548050 2043948 kapi.go:59] client config for missing-upgrade-158754: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:15:12.548533 2043948 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 01:15:12.548560 2043948 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 01:15:12.548566 2043948 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 01:15:12.548571 2043948 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 01:15:12.548576 2043948 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 01:15:12.548852 2043948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:15:12.558008 2043948 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 01:14:19.743451667 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 01:15:11.793858162 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
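
The drift is the removal of a test-only etcd flag. Reconstructed from the diff above, the old /var/tmp/minikube/kubeadm.yaml carried an extraArgs block roughly like this (surrounding ClusterConfiguration fields elided):

    etcd:
      local:
        dataDir: /var/lib/minikube/etcd
        extraArgs:
          - name: "proxy-refresh-interval"
            value: "70000"

Because kubeadm.yaml.new no longer sets proxy-refresh-interval, minikube reconfigures the control plane from the new file instead of reusing the old manifests.
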
	I1222 01:15:12.558069 2043948 kubeadm.go:1161] stopping kube-system containers ...
	I1222 01:15:12.558087 2043948 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1222 01:15:12.558148 2043948 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:15:12.606028 2043948 cri.go:96] found id: ""
	I1222 01:15:12.606138 2043948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 01:15:12.618900 2043948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:15:12.628227 2043948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:15:12.628249 2043948 kubeadm.go:158] found existing configuration files:
	
	I1222 01:15:12.628309 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:15:12.636984 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:15:12.637087 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:15:12.645476 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:15:12.654096 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:15:12.654160 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:15:12.663150 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:15:12.672121 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:15:12.672191 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:15:12.681103 2043948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:15:12.689395 2043948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:15:12.689461 2043948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
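
The four grep-then-rm steps above apply the same test to each kubeconfig file; as a shell sketch (file names and URL exactly as in the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Every grep here exits with status 2 because the files are absent (status 1 would mean present but not matching), so the rm calls are no-ops and all four files are regenerated by the kubeconfig phase below.
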
	I1222 01:15:12.698370 2043948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:15:12.707277 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:15:12.754216 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	W1222 01:15:09.964359 2044197 pod_ready.go:104] pod "etcd-pause-583340" is not "Ready", error: <nil>
	W1222 01:15:11.999593 2044197 pod_ready.go:104] pod "etcd-pause-583340" is not "Ready", error: <nil>
	I1222 01:15:13.463612 2044197 pod_ready.go:94] pod "etcd-pause-583340" is "Ready"
	I1222 01:15:13.463639 2044197 pod_ready.go:86] duration metric: took 5.505552097s for pod "etcd-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:13.466444 2044197 pod_ready.go:83] waiting for pod "kube-apiserver-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:13.474167 2044197 pod_ready.go:94] pod "kube-apiserver-pause-583340" is "Ready"
	I1222 01:15:13.474192 2044197 pod_ready.go:86] duration metric: took 7.726459ms for pod "kube-apiserver-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:13.477207 2044197 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:14.983943 2044197 pod_ready.go:94] pod "kube-controller-manager-pause-583340" is "Ready"
	I1222 01:15:14.983972 2044197 pod_ready.go:86] duration metric: took 1.506743191s for pod "kube-controller-manager-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:14.986957 2044197 pod_ready.go:83] waiting for pod "kube-proxy-wtzct" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:14.992370 2044197 pod_ready.go:94] pod "kube-proxy-wtzct" is "Ready"
	I1222 01:15:14.992394 2044197 pod_ready.go:86] duration metric: took 5.413734ms for pod "kube-proxy-wtzct" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:15.061761 2044197 pod_ready.go:83] waiting for pod "kube-scheduler-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:15.460868 2044197 pod_ready.go:94] pod "kube-scheduler-pause-583340" is "Ready"
	I1222 01:15:15.460895 2044197 pod_ready.go:86] duration metric: took 399.107562ms for pod "kube-scheduler-pause-583340" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:15:15.460907 2044197 pod_ready.go:40] duration metric: took 11.016026719s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:15:15.540908 2044197 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:15:15.544271 2044197 out.go:179] * Done! kubectl is now configured to use "pause-583340" cluster and "default" namespace by default
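
The readiness loop above (process 2044197) polls each control-plane pod by label until it is "Ready" or gone. Outside the test harness, a comparable manual check would be one kubectl wait per component; a sketch, with the labels taken from the list in the log:

    kubectl --context pause-583340 -n kube-system wait pod \
      -l component=etcd --for=condition=Ready --timeout=2m

repeated for component=kube-apiserver, component=kube-controller-manager, component=kube-scheduler, k8s-app=kube-proxy, and k8s-app=kube-dns.
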
	I1222 01:15:15.300814 2043948 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.546562114s)
	I1222 01:15:15.300906 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:15:15.456148 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:15:15.565289 2043948 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
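
Meanwhile process 2043948 re-runs the kubeadm init phases one by one rather than doing a full kubeadm init; stripped of the ssh_runner wrapping, the sequence is:

    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

Running the phases individually lets the restart path keep whatever existing state (certificates, etcd data) is still valid.
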
	I1222 01:15:15.693737 2043948 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:15:15.693809 2043948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:15:16.193935 2043948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:15:16.694650 2043948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:15:16.723819 2043948 api_server.go:72] duration metric: took 1.030088638s to wait for apiserver process to appear ...
	I1222 01:15:16.723846 2043948 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:15:16.723869 2043948 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
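
The healthz wait that follows is a plain HTTPS probe; a sketch using the certificate paths from the client config logged above (the endpoint expects the cluster CA and, depending on RBAC, client credentials):

    curl --cacert /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt \
      --cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.crt \
      --key /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/missing-upgrade-158754/client.key \
      https://192.168.85.2:8443/healthz

A 200 response with body "ok" is the condition api_server.go is waiting for.
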
	
	
	==> CRI-O <==
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.708635244Z" level=info msg="Started container" PID=2445 containerID=a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be description=kube-system/kindnet-tw999/kindnet-cni id=28adf616-3110-424a-912d-5a8aea19e95b name=/runtime.v1.RuntimeService/StartContainer sandboxID=097e191e3b64218bf698323c3402eefb1446e5bfbcd3dcfc2d1942db6bf65cb4
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.729643176Z" level=info msg="Created container 2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e: kube-system/kube-scheduler-pause-583340/kube-scheduler" id=a349b478-ae85-4786-9d64-4e1ed9bf8039 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.736475992Z" level=info msg="Created container 4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc: kube-system/kube-controller-manager-pause-583340/kube-controller-manager" id=d40122d6-b4ef-4537-b663-0d4a5bb116f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.737308402Z" level=info msg="Starting container: 4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc" id=0389e9d3-8643-4280-bbea-53ffa0b72a8d name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.740570768Z" level=info msg="Starting container: 2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e" id=32c6d50d-4bf6-41dc-b1e2-687572127c8f name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.743207831Z" level=info msg="Started container" PID=2455 containerID=4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc description=kube-system/kube-controller-manager-pause-583340/kube-controller-manager id=0389e9d3-8643-4280-bbea-53ffa0b72a8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb611e0e63292b12038c9cd74c1a6760020dacfdcf294dee5cd4848819e89636
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.744756428Z" level=info msg="Started container" PID=2449 containerID=2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e description=kube-system/kube-scheduler-pause-583340/kube-scheduler id=32c6d50d-4bf6-41dc-b1e2-687572127c8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f557e604e139cf8c956e348cebad280a5cac487288bf2a50a38fe6f97c716451
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.750117298Z" level=info msg="Created container 0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814: kube-system/coredns-66bc5c9577-q9vtv/coredns" id=e868291f-ec3d-4f0e-8174-7796e249873d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.751654194Z" level=info msg="Starting container: 0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814" id=f64cbcc3-c3f5-4ffd-ac34-2223a44463ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:14:55 pause-583340 crio[2142]: time="2025-12-22T01:14:55.753435654Z" level=info msg="Started container" PID=2461 containerID=0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814 description=kube-system/coredns-66bc5c9577-q9vtv/coredns id=f64cbcc3-c3f5-4ffd-ac34-2223a44463ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bdb5f67b60da5f174b107a4c9c8743d38b6eafcd00913306038af8ee3ee4c23
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.12293679Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.127151192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.127323256Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.127409785Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.131352975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.131388576Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.131413667Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.135800068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.135837794Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.135860636Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.139206331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.139243212Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.139274923Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.142578937Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:15:06 pause-583340 crio[2142]: time="2025-12-22T01:15:06.142633336Z" level=info msg="Updated default CNI network name to kindnet"
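
The CREATE/WRITE/RENAME events above are kindnet rewriting its CNI config atomically (write a .temp file, then rename it into place), with CRI-O's file watcher reacting to each step. To inspect the result on the node, a sketch using the path from the log:

    sudo ls -la /etc/cni/net.d/
    sudo cat /etc/cni/net.d/10-kindnet.conflist
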
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0acd4974cdd60       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     27 seconds ago       Running             coredns                   1                   4bdb5f67b60da       coredns-66bc5c9577-q9vtv               kube-system
	4e4ea80f48321       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     27 seconds ago       Running             kube-controller-manager   1                   cb611e0e63292       kube-controller-manager-pause-583340   kube-system
	2ca331b193d96       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     27 seconds ago       Running             kube-scheduler            1                   f557e604e139c       kube-scheduler-pause-583340            kube-system
	a5bf8f4c8d241       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     27 seconds ago       Running             kindnet-cni               1                   097e191e3b642       kindnet-tw999                          kube-system
	360534ca8ae0d       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     27 seconds ago       Running             kube-proxy                1                   b0b93a08eadb8       kube-proxy-wtzct                       kube-system
	53f57d7cd6467       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     27 seconds ago       Running             kube-apiserver            1                   10f5859055244       kube-apiserver-pause-583340            kube-system
	d986846af4e9c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     27 seconds ago       Running             etcd                      1                   c41b0fd246654       etcd-pause-583340                      kube-system
	5a5697ce35641       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     40 seconds ago       Exited              coredns                   0                   4bdb5f67b60da       coredns-66bc5c9577-q9vtv               kube-system
	d1433bb270cb8       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   51 seconds ago       Exited              kindnet-cni               0                   097e191e3b642       kindnet-tw999                          kube-system
	e8bde2a9b810e       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     54 seconds ago       Exited              kube-proxy                0                   b0b93a08eadb8       kube-proxy-wtzct                       kube-system
	e9c4b91f92f67       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     About a minute ago   Exited              kube-controller-manager   0                   cb611e0e63292       kube-controller-manager-pause-583340   kube-system
	75d17f9897ffa       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     About a minute ago   Exited              kube-scheduler            0                   f557e604e139c       kube-scheduler-pause-583340            kube-system
	1704cd445120e       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     About a minute ago   Exited              kube-apiserver            0                   10f5859055244       kube-apiserver-pause-583340            kube-system
	bc34fa5a980ae       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     About a minute ago   Exited              etcd                      0                   c41b0fd246654       etcd-pause-583340                      kube-system
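
This table is crictl's container listing; on the node it can be regenerated with a sketch like:

    sudo crictl ps -a

The ATTEMPT 1 rows are the restarted copies of the ATTEMPT 0 containers, which exited during the stop/start cycle this test exercises.
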
	
	
	==> coredns [0acd4974cdd60ab62ea622a3e62d2c9aef87740062eaf66da9daea93acab6814] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43462 - 37570 "HINFO IN 5849911280662542970.4270756862494641719. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034754661s
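
After the restart, this coredns instance (attempt 1) logs "waiting for Kubernetes API" until the apiserver is reachable again, then serves on :53. A quick in-cluster smoke test, as a sketch (pod name and image are illustrative, not from the log):

    kubectl -n default run dns-smoke --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
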
	
	
	==> coredns [5a5697ce35641b6523b23f1835ed38471d05acae1a2921520161c873e721fabb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37818 - 17360 "HINFO IN 8738458638431759642.221725157045792913. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022546748s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-583340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-583340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=pause-583340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_14_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:14:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-583340
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:15:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:15:13 +0000   Mon, 22 Dec 2025 01:14:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-583340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                83a8f616-4c79-4941-ae7f-7fd350bcc280
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q9vtv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-pause-583340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-tw999                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-583340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-583340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-wtzct                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-583340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node pause-583340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node pause-583340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node pause-583340 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node pause-583340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node pause-583340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node pause-583340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node pause-583340 event: Registered Node pause-583340 in Controller
	  Normal   NodeReady                41s                kubelet          Node pause-583340 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-583340 event: Registered Node pause-583340 in Controller
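
This dump corresponds to a node describe, e.g.:

    kubectl describe node pause-583340

Note the duplicated Starting and RegisteredNode events: one pair from the original start (54-72s ago) and one from the restart (17-20s ago).
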
	
	
	==> dmesg <==
	[Dec22 00:51] overlayfs: idmapped layers are currently not supported
	[Dec22 00:53] overlayfs: idmapped layers are currently not supported
	[Dec22 00:54] overlayfs: idmapped layers are currently not supported
	[ +51.911977] overlayfs: idmapped layers are currently not supported
	[  +3.821564] overlayfs: idmapped layers are currently not supported
	[Dec22 00:55] overlayfs: idmapped layers are currently not supported
	[Dec22 00:56] overlayfs: idmapped layers are currently not supported
	[Dec22 00:57] overlayfs: idmapped layers are currently not supported
	[  +3.314214] overlayfs: idmapped layers are currently not supported
	[Dec22 00:58] overlayfs: idmapped layers are currently not supported
	[Dec22 00:59] overlayfs: idmapped layers are currently not supported
	[Dec22 01:00] overlayfs: idmapped layers are currently not supported
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bc34fa5a980aea7a90312cc2cd3c62d2994e3be1c3b9e8cb44a2834dca4c396a] <==
	{"level":"warn","ts":"2025-12-22T01:14:17.849647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.867745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.893594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.912949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.934207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:17.948398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:18.114607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-22T01:14:45.634525Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-22T01:14:45.634588Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-583340","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-22T01:14:45.634679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-22T01:14:45.634748Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-22T01:14:45.789079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-22T01:14:45.789146Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-22T01:14:45.789220Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-22T01:14:45.789236Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789229Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789340Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-22T01:14:45.789376Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789462Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-22T01:14:45.789481Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-22T01:14:45.789489Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-22T01:14:45.792530Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-22T01:14:45.792618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-22T01:14:45.792646Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-22T01:14:45.792666Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-583340","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [d986846af4e9c4f8639a53c9046584c7dabd234f69aad5a9cf63520365ad4dca] <==
	{"level":"warn","ts":"2025-12-22T01:14:59.318016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.419829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.454331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.471207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.513976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.579925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.591897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.638648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.671602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.701840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.725582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.759577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.792266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.829290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.863993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.913629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.935252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.983042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:14:59.995715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.023518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.073639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.158207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.188640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.273257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:15:00.619324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:15:23 up 1 day,  7:57,  0 user,  load average: 3.38, 2.20, 1.92
	Linux pause-583340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5bf8f4c8d241d96ef93e334dc05a1cdf4eb294ea5b8916776586971f60054be] <==
	I1222 01:14:55.835830       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:14:55.837652       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1222 01:14:55.837894       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:14:55.840172       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:14:55.840259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:14:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:14:56.122611       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:14:56.122710       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:14:56.122760       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:14:56.125515       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:15:02.523134       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:15:02.523215       1 metrics.go:72] Registering metrics
	I1222 01:15:02.523302       1 controller.go:711] "Syncing nftables rules"
	I1222 01:15:06.122512       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1222 01:15:06.122574       1 main.go:301] handling current node
	I1222 01:15:16.122711       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1222 01:15:16.122744       1 main.go:301] handling current node
	
	
	==> kindnet [d1433bb270cb8215342f9ee03eba858ebc0bff54f87693fec5575b4655d4dfa0] <==
	I1222 01:14:30.840641       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:14:30.921567       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1222 01:14:30.921766       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:14:30.921807       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:14:30.921843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:14:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:14:31.039532       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:14:31.045911       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:14:31.046250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:14:31.046415       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:14:31.158116       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:14:31.221367       1 metrics.go:72] Registering metrics
	I1222 01:14:31.222242       1 controller.go:711] "Syncing nftables rules"
	I1222 01:14:41.044827       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1222 01:14:41.046112       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1704cd445120eba334cf13f2046f9af5f8dbc91bd484b46f2b98f576b2c49ae2] <==
	W1222 01:14:45.650335       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650382       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650456       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650538       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650583       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650661       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650702       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650765       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650813       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.650855       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652589       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652716       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652802       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.652931       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653012       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653061       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653103       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653185       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653250       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653304       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653394       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653625       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653695       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1222 01:14:45.653755       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [53f57d7cd64676b980d9c9d9c9843515e2f0726aa6640a5bdb474959c5074223] <==
	I1222 01:15:02.427025       1 policy_source.go:240] refreshing policies
	I1222 01:15:02.435516       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:15:02.461085       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1222 01:15:02.461407       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:15:02.461565       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1222 01:15:02.462418       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1222 01:15:02.462470       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1222 01:15:02.465146       1 aggregator.go:171] initial CRD sync complete...
	I1222 01:15:02.465200       1 autoregister_controller.go:144] Starting autoregister controller
	I1222 01:15:02.465240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:15:02.465268       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:15:02.477190       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1222 01:15:02.481494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1222 01:15:02.481814       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1222 01:15:02.481879       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1222 01:15:02.485928       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:15:02.494224       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1222 01:15:02.494294       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1222 01:15:02.507046       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1222 01:15:03.065043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:15:04.351355       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:15:05.774220       1 controller.go:667] quota admission added evaluator for: endpoints
	I1222 01:15:05.974008       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:15:06.023746       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:15:06.127082       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [4e4ea80f4832114dfd7412c15002bcddfc6d855e784903c1a9f62c65694decdc] <==
	I1222 01:15:05.722220       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1222 01:15:05.722310       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1222 01:15:05.722380       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-583340"
	I1222 01:15:05.722428       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1222 01:15:05.725024       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1222 01:15:05.726195       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:15:05.727592       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:15:05.729803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1222 01:15:05.730999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1222 01:15:05.733228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1222 01:15:05.734419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1222 01:15:05.754781       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:15:05.757958       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:15:05.765447       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:15:05.765538       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 01:15:05.765622       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1222 01:15:05.765456       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:15:05.765962       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1222 01:15:05.766045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 01:15:05.766177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:15:05.767380       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:15:05.769495       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1222 01:15:05.771014       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1222 01:15:05.785923       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1222 01:15:05.787125       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea] <==
	I1222 01:14:26.292969       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1222 01:14:26.298629       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1222 01:14:26.302847       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:14:26.304773       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-583340" podCIDRs=["10.244.0.0/24"]
	I1222 01:14:26.308385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 01:14:26.308500       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1222 01:14:26.310859       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1222 01:14:26.311112       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:14:26.311522       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:14:26.313173       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1222 01:14:26.313240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1222 01:14:26.313253       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:14:26.315191       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1222 01:14:26.315273       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1222 01:14:26.320500       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:14:26.320636       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 01:14:26.321900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:14:26.321975       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 01:14:26.322005       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 01:14:26.327935       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:14:26.330174       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:14:26.341137       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:14:26.359368       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1222 01:14:26.363277       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1222 01:14:26.379134       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [360534ca8ae0ddd9193a87336eb1e700824be47538456a2172361591d20fdd15] <==
	I1222 01:14:55.803288       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:14:56.794726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:15:02.496369       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:15:02.499936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1222 01:15:02.500919       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:15:02.612372       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:15:02.612434       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:15:02.617961       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:15:02.618264       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:15:02.618278       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:15:02.619774       1 config.go:200] "Starting service config controller"
	I1222 01:15:02.619784       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:15:02.619800       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:15:02.619804       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:15:02.619815       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:15:02.619819       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:15:02.623930       1 config.go:309] "Starting node config controller"
	I1222 01:15:02.624010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:15:02.624019       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:15:02.720250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1222 01:15:02.729238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:15:02.729276       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e8bde2a9b810e4304e64db22e4b461d56e70c2f35c0012314c20f357ef18c242] <==
	I1222 01:14:28.172045       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:14:28.265487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:14:28.366267       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:14:28.366336       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1222 01:14:28.366422       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:14:28.434008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:14:28.434218       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:14:28.444496       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:14:28.445317       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:14:28.445386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:14:28.484946       1 config.go:200] "Starting service config controller"
	I1222 01:14:28.484964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:14:28.484983       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:14:28.484987       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:14:28.484998       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:14:28.485001       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:14:28.485592       1 config.go:309] "Starting node config controller"
	I1222 01:14:28.485601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:14:28.485608       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:14:28.586176       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:14:28.586259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1222 01:14:28.586461       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2ca331b193d9678c9760cf085d1e2e91ec3fa52c84da712b29b7d720e42ffa7e] <==
	I1222 01:15:01.159388       1 serving.go:386] Generated self-signed cert in-memory
	I1222 01:15:03.514366       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1222 01:15:03.514483       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:15:03.520973       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1222 01:15:03.521019       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1222 01:15:03.521049       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:15:03.521057       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:15:03.521074       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1222 01:15:03.521086       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1222 01:15:03.521267       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:15:03.521410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1222 01:15:03.622607       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1222 01:15:03.622642       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1222 01:15:03.622659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9] <==
	E1222 01:14:19.424258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1222 01:14:19.424373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1222 01:14:19.424490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1222 01:14:19.424599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:14:19.424748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1222 01:14:19.424864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:14:19.425035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 01:14:19.425152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 01:14:20.261552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 01:14:20.325551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1222 01:14:20.355679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:14:20.385891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 01:14:20.394577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1222 01:14:20.463707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1222 01:14:20.481875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 01:14:20.527099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1222 01:14:20.527244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:14:20.585540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1222 01:14:23.460580       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:14:45.629905       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1222 01:14:45.629936       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1222 01:14:45.629954       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1222 01:14:45.629982       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:14:45.630149       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1222 01:14:45.630164       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 22 01:14:55 pause-583340 kubelet[1336]: I1222 01:14:55.579709    1336 scope.go:117] "RemoveContainer" containerID="e9c4b91f92f6760ac6463c530244b60b3c5fc40c101ffeedaf8aab3ed6c6ebea"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.580282    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tw999\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f4faa7e-1c28-47ad-997f-e0f221fb7da9" pod="kube-system/kindnet-tw999"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.580610    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtzct\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7b1b5156-bdf0-4050-bbde-959b400ff9c7" pod="kube-system/kube-proxy-wtzct"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581000    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q9vtv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0f72789-c82a-4c15-9a16-b5fe1719ee81" pod="kube-system/coredns-66bc5c9577-q9vtv"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581237    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c678e99cd3083231a93cbd7ce134239" pod="kube-system/kube-scheduler-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581473    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7e2543012fea4ce03318a1c0f6aaee0f" pod="kube-system/etcd-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581701    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="14d35a07742096b90a0b5fc6b9c36b99" pod="kube-system/kube-apiserver-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.581910    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="34d854e664c3ec1174aaf6b825d460d3" pod="kube-system/kube-controller-manager-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: I1222 01:14:55.584161    1336 scope.go:117] "RemoveContainer" containerID="75d17f9897ffa852e837ec4e818f8332363f49bb1cf5ffb051be8ec1ce7dcbb9"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.584576    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7e2543012fea4ce03318a1c0f6aaee0f" pod="kube-system/etcd-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.584752    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="14d35a07742096b90a0b5fc6b9c36b99" pod="kube-system/kube-apiserver-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.584912    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="34d854e664c3ec1174aaf6b825d460d3" pod="kube-system/kube-controller-manager-pause-583340"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.585065    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tw999\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f4faa7e-1c28-47ad-997f-e0f221fb7da9" pod="kube-system/kindnet-tw999"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.588321    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtzct\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7b1b5156-bdf0-4050-bbde-959b400ff9c7" pod="kube-system/kube-proxy-wtzct"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.588582    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q9vtv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0f72789-c82a-4c15-9a16-b5fe1719ee81" pod="kube-system/coredns-66bc5c9577-q9vtv"
	Dec 22 01:14:55 pause-583340 kubelet[1336]: E1222 01:14:55.588809    1336 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-583340\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c678e99cd3083231a93cbd7ce134239" pod="kube-system/kube-scheduler-pause-583340"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.271972    1336 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-583340\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.272206    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-583340\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="34d854e664c3ec1174aaf6b825d460d3" pod="kube-system/kube-controller-manager-pause-583340"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.323713    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-tw999\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="1f4faa7e-1c28-47ad-997f-e0f221fb7da9" pod="kube-system/kindnet-tw999"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.354372    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-wtzct\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="7b1b5156-bdf0-4050-bbde-959b400ff9c7" pod="kube-system/kube-proxy-wtzct"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: E1222 01:15:02.405304    1336 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-q9vtv\" is forbidden: User \"system:node:pause-583340\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-583340' and this object" podUID="c0f72789-c82a-4c15-9a16-b5fe1719ee81" pod="kube-system/coredns-66bc5c9577-q9vtv"
	Dec 22 01:15:02 pause-583340 kubelet[1336]: W1222 01:15:02.569355    1336 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 22 01:15:16 pause-583340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:15:16 pause-583340 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:15:16 pause-583340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-583340 -n pause-583340
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-583340 -n pause-583340: exit status 2 (466.453254ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-583340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.03s)
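
Note on the status check above: `minikube status --format={{.APIServer}}` renders a Go template over minikube's status struct, and the binary exits non-zero (here 2) when any component is unhealthy even though the APIServer field itself prints "Running"; helpers_test.go therefore treats exit status 2 as "may be ok". A minimal Go sketch of that tolerant check, reusing the binary path and profile name from this run (a hypothetical helper, not the harness's actual code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// apiServerStatus runs the same command the harness does and, like
	// helpers_test.go, treats exit status 2 as degraded-but-inspectable
	// rather than a hard failure.
	func apiServerStatus(profile string) (string, int, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) && ee.ExitCode() == 2 {
				return string(out), 2, nil // "may be ok": keep the rendered field
			}
			return "", -1, err
		}
		return string(out), 0, nil
	}

	func main() {
		s, code, err := apiServerStatus("pause-583340")
		fmt.Printf("status=%q exit=%d err=%v\n", s, code, err)
	}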

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.898495ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:32:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-535197 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-535197 describe deploy/metrics-server -n kube-system: exit status 1 (83.76655ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-535197 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
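
The MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight pause check: before enabling an addon it asks the container runtime which containers are paused, and on this crio node `sudo runc list -f json` fails outright because /run/runc does not exist yet. A rough Go sketch of such a check, assuming runc's JSON field names ("id", "status") rather than quoting minikube's actual implementation:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields of interest from `runc list -f json`;
	// the tag names are assumptions about runc's output, not minikube's types.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listPaused shells out exactly as the error message shows and returns
	// the IDs of paused containers.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The failure in this run: runc exits 1 when its state directory
			// (/run/runc) is missing, which aborts the addon enable.
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		fmt.Println(listPaused())
	}
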
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-535197
helpers_test.go:244: (dbg) docker inspect old-k8s-version-535197:

-- stdout --
	[
	    {
	        "Id": "d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1",
	        "Created": "2025-12-22T01:31:08.113695685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2107561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:31:08.185050171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1-json.log",
	        "Name": "/old-k8s-version-535197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-535197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-535197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1",
	                "LowerDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-535197",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-535197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-535197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-535197",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-535197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17ec8d19e5477116cad50f9710e0c597474224b1f91f7fd0c4ae6848bdd1e561",
	            "SandboxKey": "/var/run/docker/netns/17ec8d19e547",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38741"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38742"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38745"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38743"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38744"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-535197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:f0:6f:ef:1b:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5aea71c8f08ab22f4aef4ac9bf8714a6f83176256b6002b76a15250fc2c9efb0",
	                    "EndpointID": "f0aec60bc7c1e861a3626654787465c44fe09a626c3f18c7c07bc0b813978d89",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-535197",
	                        "d786ec4bcd2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-535197 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-535197 logs -n 25: (1.173049928s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-678854 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ ssh     │ -p cilium-678854 sudo containerd config dump                                                                                                                                                                                                  │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ ssh     │ -p cilium-678854 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ ssh     │ -p cilium-678854 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ ssh     │ -p cilium-678854 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ ssh     │ -p cilium-678854 sudo crio config                                                                                                                                                                                                             │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ delete  │ -p cilium-678854                                                                                                                                                                                                                              │ cilium-678854             │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ stop    │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p force-systemd-flag-314223 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ force-systemd-flag-314223 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p force-systemd-flag-314223                                                                                                                                                                                                                  │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ start   │ -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ cert-options-433911 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ -p cert-options-433911 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:31:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:31:02.150875 2107164 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:31:02.150986 2107164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:31:02.151002 2107164 out.go:374] Setting ErrFile to fd 2...
	I1222 01:31:02.151008 2107164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:31:02.151305 2107164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:31:02.151717 2107164 out.go:368] Setting JSON to false
	I1222 01:31:02.152776 2107164 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116012,"bootTime":1766251051,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:31:02.152846 2107164 start.go:143] virtualization:  
	I1222 01:31:02.159056 2107164 out.go:179] * [old-k8s-version-535197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:31:02.162513 2107164 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:31:02.162649 2107164 notify.go:221] Checking for updates...
	I1222 01:31:02.169174 2107164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:31:02.172290 2107164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:31:02.175399 2107164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:31:02.178569 2107164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:31:02.181520 2107164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:31:02.184886 2107164 config.go:182] Loaded profile config "cert-expiration-031738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:31:02.185035 2107164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:31:02.219944 2107164 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:31:02.220073 2107164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:31:02.274454 2107164 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:31:02.264851776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:31:02.274559 2107164 docker.go:319] overlay module found
	I1222 01:31:02.277852 2107164 out.go:179] * Using the docker driver based on user configuration
	I1222 01:31:02.280797 2107164 start.go:309] selected driver: docker
	I1222 01:31:02.280816 2107164 start.go:928] validating driver "docker" against <nil>
	I1222 01:31:02.280830 2107164 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:31:02.281653 2107164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:31:02.344359 2107164 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:31:02.33512906 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:31:02.344606 2107164 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:31:02.345558 2107164 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:31:02.348603 2107164 out.go:179] * Using Docker driver with root privileges
	I1222 01:31:02.351517 2107164 cni.go:84] Creating CNI manager for ""
	I1222 01:31:02.351594 2107164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:31:02.351607 2107164 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:31:02.351680 2107164 start.go:353] cluster config:
	{Name:old-k8s-version-535197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-535197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:31:02.354864 2107164 out.go:179] * Starting "old-k8s-version-535197" primary control-plane node in "old-k8s-version-535197" cluster
	I1222 01:31:02.357632 2107164 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:31:02.360594 2107164 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:31:02.363382 2107164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1222 01:31:02.363429 2107164 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1222 01:31:02.363440 2107164 cache.go:65] Caching tarball of preloaded images
	I1222 01:31:02.363459 2107164 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:31:02.363520 2107164 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:31:02.363530 2107164 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1222 01:31:02.363716 2107164 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/config.json ...
	I1222 01:31:02.363746 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/config.json: {Name:mkb9a46ef46fc3c5c4cc5307242d9f7056d9612c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:02.382567 2107164 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:31:02.382589 2107164 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:31:02.382610 2107164 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:31:02.382646 2107164 start.go:360] acquireMachinesLock for old-k8s-version-535197: {Name:mkecf7d4bc455989d8b7a50c9820d107a1467733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:31:02.382764 2107164 start.go:364] duration metric: took 96.851µs to acquireMachinesLock for "old-k8s-version-535197"
	I1222 01:31:02.382793 2107164 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-535197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-535197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:31:02.382864 2107164 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:31:02.388073 2107164 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:31:02.388324 2107164 start.go:159] libmachine.API.Create for "old-k8s-version-535197" (driver="docker")
	I1222 01:31:02.388367 2107164 client.go:173] LocalClient.Create starting
	I1222 01:31:02.388449 2107164 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:31:02.388489 2107164 main.go:144] libmachine: Decoding PEM data...
	I1222 01:31:02.388508 2107164 main.go:144] libmachine: Parsing certificate...
	I1222 01:31:02.388563 2107164 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:31:02.388585 2107164 main.go:144] libmachine: Decoding PEM data...
	I1222 01:31:02.388601 2107164 main.go:144] libmachine: Parsing certificate...
	I1222 01:31:02.388974 2107164 cli_runner.go:164] Run: docker network inspect old-k8s-version-535197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:31:02.404983 2107164 cli_runner.go:211] docker network inspect old-k8s-version-535197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:31:02.405092 2107164 network_create.go:284] running [docker network inspect old-k8s-version-535197] to gather additional debugging logs...
	I1222 01:31:02.405112 2107164 cli_runner.go:164] Run: docker network inspect old-k8s-version-535197
	W1222 01:31:02.420850 2107164 cli_runner.go:211] docker network inspect old-k8s-version-535197 returned with exit code 1
	I1222 01:31:02.420880 2107164 network_create.go:287] error running [docker network inspect old-k8s-version-535197]: docker network inspect old-k8s-version-535197: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-535197 not found
	I1222 01:31:02.420893 2107164 network_create.go:289] output of [docker network inspect old-k8s-version-535197]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-535197 not found
	
	** /stderr **
	I1222 01:31:02.420996 2107164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:31:02.437544 2107164 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:31:02.437892 2107164 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:31:02.438240 2107164 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:31:02.438554 2107164 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-231a004dbfc3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:c4:9d:cf:a8:5d} reservation:<nil>}
	I1222 01:31:02.438984 2107164 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6e8e0}
	I1222 01:31:02.439010 2107164 network_create.go:124] attempt to create docker network old-k8s-version-535197 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:31:02.439064 2107164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-535197 old-k8s-version-535197
	I1222 01:31:02.500419 2107164 network_create.go:108] docker network old-k8s-version-535197 192.168.85.0/24 created
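	The network-create invocation above can be replayed by hand with the same flags; a minimal sketch (the name "demo-net" is hypothetical, the subnet is the one this run selected):
	  docker network create --driver=bridge \
	    --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 demo-net
	  # confirm the allocated subnet
	  docker network inspect demo-net --format '{{(index .IPAM.Config 0).Subnet}}'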
	I1222 01:31:02.500449 2107164 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-535197" container
	I1222 01:31:02.500544 2107164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:31:02.516923 2107164 cli_runner.go:164] Run: docker volume create old-k8s-version-535197 --label name.minikube.sigs.k8s.io=old-k8s-version-535197 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:31:02.536020 2107164 oci.go:103] Successfully created a docker volume old-k8s-version-535197
	I1222 01:31:02.536114 2107164 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-535197-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-535197 --entrypoint /usr/bin/test -v old-k8s-version-535197:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:31:03.077166 2107164 oci.go:107] Successfully prepared a docker volume old-k8s-version-535197
	I1222 01:31:03.077230 2107164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1222 01:31:03.077243 2107164 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:31:03.077314 2107164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-535197:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:31:08.030505 2107164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-535197:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.953118543s)
	I1222 01:31:08.030536 2107164 kic.go:203] duration metric: took 4.953290033s to extract preloaded images to volume ...
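	The preload sidecar above is a plain tar extraction into the profile volume; a hand-run equivalent, assuming the tarball sits in the current directory and "demo-vol" stands in for the profile volume:
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PWD/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
	    -v demo-vol:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260 \
	    -I lz4 -xf /preloaded.tar -C /extractDir   # -I lz4 decompresses through the lz4 binary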
	W1222 01:31:08.030694 2107164 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:31:08.030796 2107164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:31:08.098564 2107164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-535197 --name old-k8s-version-535197 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-535197 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-535197 --network old-k8s-version-535197 --ip 192.168.85.2 --volume old-k8s-version-535197:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:31:08.432632 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Running}}
	I1222 01:31:08.462880 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:31:08.488432 2107164 cli_runner.go:164] Run: docker exec old-k8s-version-535197 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:31:08.557920 2107164 oci.go:144] the created container "old-k8s-version-535197" has a running status.
	I1222 01:31:08.557957 2107164 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa...
	I1222 01:31:09.355861 2107164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:31:09.378722 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:31:09.400587 2107164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:31:09.400618 2107164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-535197 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:31:09.438989 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:31:09.456974 2107164 machine.go:94] provisionDockerMachine start ...
	I1222 01:31:09.457065 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:09.473582 2107164 main.go:144] libmachine: Using SSH client type: native
	I1222 01:31:09.474206 2107164 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38741 <nil> <nil>}
	I1222 01:31:09.474226 2107164 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:31:09.474842 2107164 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:31:12.613315 2107164 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-535197
	
	I1222 01:31:12.613367 2107164 ubuntu.go:182] provisioning hostname "old-k8s-version-535197"
	I1222 01:31:12.613447 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:12.632257 2107164 main.go:144] libmachine: Using SSH client type: native
	I1222 01:31:12.632593 2107164 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38741 <nil> <nil>}
	I1222 01:31:12.632612 2107164 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-535197 && echo "old-k8s-version-535197" | sudo tee /etc/hostname
	I1222 01:31:12.779615 2107164 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-535197
	
	I1222 01:31:12.779691 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:12.800191 2107164 main.go:144] libmachine: Using SSH client type: native
	I1222 01:31:12.800580 2107164 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38741 <nil> <nil>}
	I1222 01:31:12.800603 2107164 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-535197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-535197/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-535197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:31:12.941726 2107164 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:31:12.941752 2107164 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:31:12.941771 2107164 ubuntu.go:190] setting up certificates
	I1222 01:31:12.941790 2107164 provision.go:84] configureAuth start
	I1222 01:31:12.941849 2107164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-535197
	I1222 01:31:12.960037 2107164 provision.go:143] copyHostCerts
	I1222 01:31:12.960103 2107164 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:31:12.960115 2107164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:31:12.960198 2107164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:31:12.960304 2107164 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:31:12.960322 2107164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:31:12.960349 2107164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:31:12.960409 2107164 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:31:12.960419 2107164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:31:12.960444 2107164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:31:12.960495 2107164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-535197 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-535197]
	I1222 01:31:13.538078 2107164 provision.go:177] copyRemoteCerts
	I1222 01:31:13.538155 2107164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:31:13.538198 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:13.555417 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:13.653621 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:31:13.671104 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 01:31:13.691390 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:31:13.708857 2107164 provision.go:87] duration metric: took 767.044205ms to configureAuth
	I1222 01:31:13.708883 2107164 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:31:13.709077 2107164 config.go:182] Loaded profile config "old-k8s-version-535197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:31:13.709186 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:13.725857 2107164 main.go:144] libmachine: Using SSH client type: native
	I1222 01:31:13.726180 2107164 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38741 <nil> <nil>}
	I1222 01:31:13.726201 2107164 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:31:14.022438 2107164 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:31:14.022472 2107164 machine.go:97] duration metric: took 4.565464679s to provisionDockerMachine
	I1222 01:31:14.022488 2107164 client.go:176] duration metric: took 11.634110504s to LocalClient.Create
	I1222 01:31:14.022503 2107164 start.go:167] duration metric: took 11.634186267s to libmachine.API.Create "old-k8s-version-535197"
	I1222 01:31:14.022514 2107164 start.go:293] postStartSetup for "old-k8s-version-535197" (driver="docker")
	I1222 01:31:14.022525 2107164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:31:14.022595 2107164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:31:14.022635 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:14.041825 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:14.141614 2107164 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:31:14.145073 2107164 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:31:14.145100 2107164 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:31:14.145113 2107164 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:31:14.145167 2107164 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:31:14.145249 2107164 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:31:14.145375 2107164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:31:14.153079 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:31:14.171405 2107164 start.go:296] duration metric: took 148.877229ms for postStartSetup
	I1222 01:31:14.171824 2107164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-535197
	I1222 01:31:14.190132 2107164 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/config.json ...
	I1222 01:31:14.190451 2107164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:31:14.190500 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:14.208496 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:14.306481 2107164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:31:14.311037 2107164 start.go:128] duration metric: took 11.928158389s to createHost
	I1222 01:31:14.311060 2107164 start.go:83] releasing machines lock for "old-k8s-version-535197", held for 11.928284605s
	I1222 01:31:14.311143 2107164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-535197
	I1222 01:31:14.327641 2107164 ssh_runner.go:195] Run: cat /version.json
	I1222 01:31:14.327696 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:14.327942 2107164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:31:14.328009 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:14.346617 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:14.347267 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:14.441016 2107164 ssh_runner.go:195] Run: systemctl --version
	I1222 01:31:14.529900 2107164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:31:14.567195 2107164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:31:14.571833 2107164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:31:14.571907 2107164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:31:14.601244 2107164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:31:14.601283 2107164 start.go:496] detecting cgroup driver to use...
	I1222 01:31:14.601317 2107164 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:31:14.601411 2107164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:31:14.619758 2107164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:31:14.632641 2107164 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:31:14.632733 2107164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:31:14.650836 2107164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:31:14.669606 2107164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:31:14.796200 2107164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:31:14.940772 2107164 docker.go:234] disabling docker service ...
	I1222 01:31:14.940885 2107164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:31:14.963622 2107164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:31:14.977508 2107164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:31:15.109525 2107164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:31:15.237714 2107164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:31:15.252577 2107164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:31:15.267154 2107164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1222 01:31:15.267222 2107164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:31:15.276513 2107164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:31:15.276582 2107164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:31:15.285269 2107164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:31:15.294422 2107164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:31:15.303281 2107164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:31:15.312016 2107164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:31:15.320829 2107164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:31:15.334900 2107164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
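	The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place; an illustrative check (not part of this run) that the keys landed before crio is restarted:
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf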
	I1222 01:31:15.343840 2107164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:31:15.351703 2107164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:31:15.359861 2107164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:31:15.482830 2107164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:31:15.672632 2107164 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:31:15.672706 2107164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:31:15.676550 2107164 start.go:564] Will wait 60s for crictl version
	I1222 01:31:15.676628 2107164 ssh_runner.go:195] Run: which crictl
	I1222 01:31:15.680147 2107164 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:31:15.706975 2107164 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:31:15.707068 2107164 ssh_runner.go:195] Run: crio --version
	I1222 01:31:15.738155 2107164 ssh_runner.go:195] Run: crio --version
	I1222 01:31:15.772713 2107164 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1222 01:31:15.775517 2107164 cli_runner.go:164] Run: docker network inspect old-k8s-version-535197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:31:15.790554 2107164 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:31:15.794452 2107164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
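	The one-liner above pins host.minikube.internal to the network gateway: it filters any stale entry out of /etc/hosts, appends the fresh mapping, and sudo-copies the temp file back. Schematically (a paraphrase of the logged command, not a new one):
	  { grep -v 'host.minikube.internal' /etc/hosts; \
	    echo '192.168.85.1 host.minikube.internal'; } > /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts
	The same pattern repeats below for control-plane.minikube.internal.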
	I1222 01:31:15.804592 2107164 kubeadm.go:884] updating cluster {Name:old-k8s-version-535197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-535197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:31:15.804724 2107164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1222 01:31:15.804787 2107164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:31:15.837932 2107164 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:31:15.837954 2107164 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:31:15.838029 2107164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:31:15.864938 2107164 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:31:15.864959 2107164 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:31:15.864967 2107164 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1222 01:31:15.865061 2107164 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-535197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-535197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
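	Once this drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below), the merged unit can be inspected on the node with a standard systemd command (illustrative, not part of this log):
	  systemctl cat kubelet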
	I1222 01:31:15.865142 2107164 ssh_runner.go:195] Run: crio config
	I1222 01:31:15.936899 2107164 cni.go:84] Creating CNI manager for ""
	I1222 01:31:15.936924 2107164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:31:15.936939 2107164 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:31:15.936964 2107164 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-535197 NodeName:old-k8s-version-535197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:31:15.937118 2107164 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-535197"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:31:15.937196 2107164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1222 01:31:15.945628 2107164 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:31:15.945743 2107164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:31:15.953823 2107164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:31:15.966469 2107164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:31:15.980096 2107164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
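	The rendered kubeadm config can be exercised without mutating the node via kubeadm's dry-run mode; a hypothetical invocation this run does not perform:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run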
	I1222 01:31:15.993428 2107164 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:31:15.997119 2107164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:31:16.008539 2107164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:31:16.135560 2107164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:31:16.153808 2107164 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197 for IP: 192.168.85.2
	I1222 01:31:16.153829 2107164 certs.go:195] generating shared ca certs ...
	I1222 01:31:16.153846 2107164 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:16.153986 2107164 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:31:16.154046 2107164 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:31:16.154058 2107164 certs.go:257] generating profile certs ...
	I1222 01:31:16.154113 2107164 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.key
	I1222 01:31:16.154131 2107164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt with IP's: []
	I1222 01:31:16.365662 2107164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt ...
	I1222 01:31:16.365692 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: {Name:mkf440d4e3f86a610188d601fcac7df53945560a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:16.365891 2107164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.key ...
	I1222 01:31:16.365906 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.key: {Name:mk18bd75c68c4a9280283de6e041df47cd16527a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:16.366007 2107164 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.key.456393f7
	I1222 01:31:16.366025 2107164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt.456393f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:31:16.495048 2107164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt.456393f7 ...
	I1222 01:31:16.495076 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt.456393f7: {Name:mk682a0bd2fd2af171390043c2ab99d39b5823c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:16.495243 2107164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.key.456393f7 ...
	I1222 01:31:16.495257 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.key.456393f7: {Name:mk1ce2564984a2cc66b5402983ddded021a6f8e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:16.495337 2107164 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt.456393f7 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt
	I1222 01:31:16.495429 2107164 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.key.456393f7 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.key
	I1222 01:31:16.495520 2107164 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.key
	I1222 01:31:16.495539 2107164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.crt with IP's: []
	I1222 01:31:16.760784 2107164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.crt ...
	I1222 01:31:16.760814 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.crt: {Name:mk15d57ad9363d96f2eca726b63dea97bea78d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:16.760994 2107164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.key ...
	I1222 01:31:16.761007 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.key: {Name:mk6b268b936ed3fb3ebc9476e837bab12a3dc692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
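The apiserver serving certificate generated above is signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], i.e. the service VIPs, loopback, and the node IP. One way to confirm the SAN list once the cert is on disk, a sketch against the profile path this run uses:

    # print the Subject Alternative Name extension of the generated apiserver cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt \
      | grep -A1 'Subject Alternative Name'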
	I1222 01:31:16.761190 2107164 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:31:16.761236 2107164 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:31:16.761249 2107164 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:31:16.761279 2107164 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:31:16.761313 2107164 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:31:16.761355 2107164 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:31:16.761404 2107164 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:31:16.761975 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:31:16.780067 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:31:16.799543 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:31:16.818966 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:31:16.837443 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1222 01:31:16.864439 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:31:16.884748 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:31:16.905487 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:31:16.926491 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:31:16.956539 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:31:16.976734 2107164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:31:16.995197 2107164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:31:17.009829 2107164 ssh_runner.go:195] Run: openssl version
	I1222 01:31:17.016257 2107164 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:31:17.023940 2107164 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:31:17.031390 2107164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:31:17.035330 2107164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:31:17.035392 2107164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:31:17.078604 2107164 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:31:17.087465 2107164 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:31:17.095118 2107164 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:31:17.104448 2107164 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:31:17.112188 2107164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:31:17.115845 2107164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:31:17.115910 2107164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:31:17.157448 2107164 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:31:17.164742 2107164 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:31:17.172062 2107164 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:31:17.179529 2107164 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:31:17.186891 2107164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:31:17.191837 2107164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:31:17.191909 2107164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:31:17.233749 2107164 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:31:17.241739 2107164 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
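The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, and each ln -fs publishes the PEM under <hash>.0 (b5213941.0 for minikubeCA.pem in this run). Rebuilding one such link by hand:

    # link a CA into OpenSSL's default verify directory under its subject hash
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"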
	I1222 01:31:17.249979 2107164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:31:17.253734 2107164 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:31:17.253786 2107164 kubeadm.go:401] StartCluster: {Name:old-k8s-version-535197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-535197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:31:17.253857 2107164 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:31:17.253915 2107164 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:31:17.280087 2107164 cri.go:96] found id: ""
	I1222 01:31:17.280171 2107164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:31:17.288277 2107164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:31:17.296029 2107164 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:31:17.296116 2107164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:31:17.303717 2107164 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:31:17.303738 2107164 kubeadm.go:158] found existing configuration files:
	
	I1222 01:31:17.303809 2107164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:31:17.311671 2107164 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:31:17.311736 2107164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:31:17.319216 2107164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:31:17.326885 2107164 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:31:17.326982 2107164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:31:17.334663 2107164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:31:17.342923 2107164 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:31:17.342989 2107164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:31:17.351165 2107164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:31:17.359379 2107164 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:31:17.359443 2107164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
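Each grep/rm pair above tests whether an existing kubeconfig already points at the expected apiserver endpoint and deletes it otherwise; since this is a first start, every grep exits with status 2 on the missing file and the rm is a no-op. Folded into a loop, the check is roughly:

    # drop kubeconfigs that don't reference the expected apiserver endpoint
    EP=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done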
	I1222 01:31:17.367402 2107164 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:31:17.417411 2107164 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1222 01:31:17.417473 2107164 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:31:17.454414 2107164 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:31:17.454491 2107164 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:31:17.454533 2107164 kubeadm.go:319] OS: Linux
	I1222 01:31:17.454583 2107164 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:31:17.454635 2107164 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:31:17.454689 2107164 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:31:17.454740 2107164 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:31:17.454790 2107164 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:31:17.454841 2107164 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:31:17.454888 2107164 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:31:17.454940 2107164 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:31:17.454991 2107164 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:31:17.544128 2107164 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:31:17.544258 2107164 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:31:17.544361 2107164 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:31:17.722628 2107164 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:31:17.727557 2107164 out.go:252]   - Generating certificates and keys ...
	I1222 01:31:17.727688 2107164 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:31:17.727781 2107164 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:31:17.910694 2107164 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:31:18.638775 2107164 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:31:19.134956 2107164 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:31:19.729378 2107164 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:31:20.236025 2107164 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:31:20.236492 2107164 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-535197] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:31:20.513180 2107164 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:31:20.513611 2107164 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-535197] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:31:20.735773 2107164 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:31:20.943251 2107164 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:31:21.609100 2107164 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:31:21.609454 2107164 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:31:22.660273 2107164 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:31:22.820720 2107164 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:31:23.016800 2107164 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:31:23.492637 2107164 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:31:23.493508 2107164 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:31:23.496196 2107164 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:31:23.499678 2107164 out.go:252]   - Booting up control plane ...
	I1222 01:31:23.499779 2107164 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:31:23.499857 2107164 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:31:23.502687 2107164 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:31:23.519260 2107164 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:31:23.520434 2107164 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:31:23.520616 2107164 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:31:23.655642 2107164 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1222 01:31:31.659008 2107164 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.003318 seconds
	I1222 01:31:31.659139 2107164 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 01:31:31.675717 2107164 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 01:31:32.210712 2107164 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 01:31:32.210922 2107164 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-535197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 01:31:32.725217 2107164 kubeadm.go:319] [bootstrap-token] Using token: 2uwxsf.53yel8nwd8pnpfi5
	I1222 01:31:32.728178 2107164 out.go:252]   - Configuring RBAC rules ...
	I1222 01:31:32.728299 2107164 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 01:31:32.732669 2107164 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 01:31:32.744483 2107164 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 01:31:32.750520 2107164 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 01:31:32.755060 2107164 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 01:31:32.759291 2107164 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 01:31:32.774887 2107164 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 01:31:33.142294 2107164 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 01:31:33.227729 2107164 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 01:31:33.229074 2107164 kubeadm.go:319] 
	I1222 01:31:33.229154 2107164 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 01:31:33.229164 2107164 kubeadm.go:319] 
	I1222 01:31:33.229241 2107164 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 01:31:33.229250 2107164 kubeadm.go:319] 
	I1222 01:31:33.229285 2107164 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 01:31:33.229378 2107164 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 01:31:33.229434 2107164 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 01:31:33.229443 2107164 kubeadm.go:319] 
	I1222 01:31:33.229502 2107164 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 01:31:33.229516 2107164 kubeadm.go:319] 
	I1222 01:31:33.229567 2107164 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 01:31:33.229574 2107164 kubeadm.go:319] 
	I1222 01:31:33.229626 2107164 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 01:31:33.229704 2107164 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 01:31:33.229775 2107164 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 01:31:33.229783 2107164 kubeadm.go:319] 
	I1222 01:31:33.229871 2107164 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 01:31:33.229951 2107164 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 01:31:33.229959 2107164 kubeadm.go:319] 
	I1222 01:31:33.230042 2107164 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2uwxsf.53yel8nwd8pnpfi5 \
	I1222 01:31:33.230149 2107164 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 01:31:33.230174 2107164 kubeadm.go:319] 	--control-plane 
	I1222 01:31:33.230181 2107164 kubeadm.go:319] 
	I1222 01:31:33.230266 2107164 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 01:31:33.230273 2107164 kubeadm.go:319] 
	I1222 01:31:33.230355 2107164 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2uwxsf.53yel8nwd8pnpfi5 \
	I1222 01:31:33.230457 2107164 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 01:31:33.232641 2107164 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:31:33.232783 2107164 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
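Both preflight warnings are non-fatal here: the SystemVerification one reflects a missing "configs" kernel module on the 5.15.0-1084-aws kernel, and the Service-Kubelet one notes that the kubelet unit was started directly (systemctl start kubelet at 01:31:16 above) but never enabled. On a host that should survive reboots, the fix is the one the warning itself names:

    sudo systemctl enable kubelet.service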
	I1222 01:31:33.232809 2107164 cni.go:84] Creating CNI manager for ""
	I1222 01:31:33.232818 2107164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:31:33.236066 2107164 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 01:31:33.238910 2107164 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 01:31:33.247099 2107164 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1222 01:31:33.247118 2107164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 01:31:33.271356 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
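Because the docker driver is paired with the crio runtime, the CNI manager picks kindnet, and the kubectl apply above installs its manifest; the kindnet-bf7tv pod reported Running later in this log confirms the rollout. A quick manual check in the same style as the logged commands:

    # list kindnet pods via the node's cached kubectl and kubeconfig
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide | grep kindnet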
	I1222 01:31:34.184722 2107164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 01:31:34.184858 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:34.184928 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-535197 minikube.k8s.io/updated_at=2025_12_22T01_31_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=old-k8s-version-535197 minikube.k8s.io/primary=true
	I1222 01:31:34.387381 2107164 ops.go:34] apiserver oom_adj: -16
	I1222 01:31:34.387501 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:34.888485 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:35.388059 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:35.887645 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:36.388181 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:36.887644 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:37.388263 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:37.888519 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:38.387644 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:38.887838 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:39.387815 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:39.888359 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:40.388372 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:40.887754 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:41.387886 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:41.887697 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:42.388256 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:42.888421 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:43.388363 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:43.888355 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:44.388135 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:44.888110 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:45.387842 2107164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:31:45.504692 2107164 kubeadm.go:1114] duration metric: took 11.319891341s to wait for elevateKubeSystemPrivileges
	I1222 01:31:45.504720 2107164 kubeadm.go:403] duration metric: took 28.250936684s to StartCluster
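The burst of kubectl get sa default calls between 01:31:34 and 01:31:45 is a ~500ms poll: the minikube-rbac clusterrolebinding only takes effect once the controller-manager has minted the default service account, and the 11.3s elevateKubeSystemPrivileges metric above is exactly that wait. A standalone sketch of the same loop:

    # block until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done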
	I1222 01:31:45.504736 2107164 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:45.504799 2107164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:31:45.505722 2107164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:31:45.505936 2107164 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:31:45.506097 2107164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 01:31:45.506351 2107164 config.go:182] Loaded profile config "old-k8s-version-535197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:31:45.506386 2107164 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:31:45.506445 2107164 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-535197"
	I1222 01:31:45.506458 2107164 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-535197"
	I1222 01:31:45.506479 2107164 host.go:66] Checking if "old-k8s-version-535197" exists ...
	I1222 01:31:45.507069 2107164 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-535197"
	I1222 01:31:45.507088 2107164 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-535197"
	I1222 01:31:45.507342 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:31:45.507559 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:31:45.509049 2107164 out.go:179] * Verifying Kubernetes components...
	I1222 01:31:45.511977 2107164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:31:45.552987 2107164 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-535197"
	I1222 01:31:45.553035 2107164 host.go:66] Checking if "old-k8s-version-535197" exists ...
	I1222 01:31:45.553625 2107164 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:31:45.554464 2107164 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:31:45.557577 2107164 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:31:45.557601 2107164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:31:45.557667 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:45.591465 2107164 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:31:45.591491 2107164 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:31:45.591552 2107164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:31:45.606197 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:45.636600 2107164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38741 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:31:45.840878 2107164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 01:31:45.903727 2107164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:31:45.937830 2107164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:31:45.975628 2107164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:31:46.830749 2107164 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
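The sed pipeline at 01:31:45 splices a hosts{} stanza (192.168.85.1 host.minikube.internal, with fallthrough) into the CoreDNS Corefile and replaces the ConfigMap in place, which is what the "host record injected" line confirms. Reading the stanza back:

    # show the injected hosts block in the live Corefile
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'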
	I1222 01:31:46.832670 2107164 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-535197" to be "Ready" ...
	I1222 01:31:47.237878 2107164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.262211019s)
	I1222 01:31:47.240865 2107164 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1222 01:31:47.244873 2107164 addons.go:530] duration metric: took 1.738474203s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1222 01:31:47.336421 2107164 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-535197" context rescaled to 1 replicas
	W1222 01:31:48.836033 2107164 node_ready.go:57] node "old-k8s-version-535197" has "Ready":"False" status (will retry)
	W1222 01:31:50.840646 2107164 node_ready.go:57] node "old-k8s-version-535197" has "Ready":"False" status (will retry)
	W1222 01:31:53.336759 2107164 node_ready.go:57] node "old-k8s-version-535197" has "Ready":"False" status (will retry)
	W1222 01:31:55.836214 2107164 node_ready.go:57] node "old-k8s-version-535197" has "Ready":"False" status (will retry)
	W1222 01:31:58.336145 2107164 node_ready.go:57] node "old-k8s-version-535197" has "Ready":"False" status (will retry)
	I1222 01:31:59.836132 2107164 node_ready.go:49] node "old-k8s-version-535197" is "Ready"
	I1222 01:31:59.836166 2107164 node_ready.go:38] duration metric: took 13.003467588s for node "old-k8s-version-535197" to be "Ready" ...
	I1222 01:31:59.836182 2107164 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:31:59.836244 2107164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:31:59.864911 2107164 api_server.go:72] duration metric: took 14.358946645s to wait for apiserver process to appear ...
	I1222 01:31:59.864938 2107164 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:31:59.864958 2107164 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:31:59.873528 2107164 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 01:31:59.874995 2107164 api_server.go:141] control plane version: v1.28.0
	I1222 01:31:59.875028 2107164 api_server.go:131] duration metric: took 10.082362ms to wait for apiserver health ...
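The healthz probe above talks TLS straight to the apiserver at 192.168.85.2:8443. Reproducing it from the host needs the cluster CA and, unless anonymous auth is enabled, a client certificate; a sketch using this profile's files:

    # expect "ok" from a healthy control plane
    P=/home/jenkins/minikube-integration/22179-1874095/.minikube
    curl --cacert "$P/ca.crt" \
         --cert "$P/profiles/old-k8s-version-535197/client.crt" \
         --key  "$P/profiles/old-k8s-version-535197/client.key" \
         https://192.168.85.2:8443/healthz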
	I1222 01:31:59.875039 2107164 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:31:59.878908 2107164 system_pods.go:59] 8 kube-system pods found
	I1222 01:31:59.878943 2107164 system_pods.go:61] "coredns-5dd5756b68-6hgk8" [f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:31:59.878951 2107164 system_pods.go:61] "etcd-old-k8s-version-535197" [5b73ae66-a1c5-4c7b-b8a5-654306804a94] Running
	I1222 01:31:59.878956 2107164 system_pods.go:61] "kindnet-bf7tv" [d6b07eb1-ed67-41c8-90b1-50f57ab9de7c] Running
	I1222 01:31:59.878960 2107164 system_pods.go:61] "kube-apiserver-old-k8s-version-535197" [ba4de7ac-8e7f-42cd-b7cd-f2b1b0dbfa2b] Running
	I1222 01:31:59.878971 2107164 system_pods.go:61] "kube-controller-manager-old-k8s-version-535197" [9ba6fae4-da1d-4308-af2e-d8f4adcf32ee] Running
	I1222 01:31:59.878975 2107164 system_pods.go:61] "kube-proxy-lfhlv" [35aab6f4-62d9-4eb5-8371-8ce2ce72aac9] Running
	I1222 01:31:59.878979 2107164 system_pods.go:61] "kube-scheduler-old-k8s-version-535197" [56d0fa9f-dbd3-43a2-b8e5-3bfd5a130246] Running
	I1222 01:31:59.878994 2107164 system_pods.go:61] "storage-provisioner" [1933c639-b9fc-40e3-a270-1cf36c9e0e89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:31:59.879000 2107164 system_pods.go:74] duration metric: took 3.95586ms to wait for pod list to return data ...
	I1222 01:31:59.879010 2107164 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:31:59.881311 2107164 default_sa.go:45] found service account: "default"
	I1222 01:31:59.881370 2107164 default_sa.go:55] duration metric: took 2.354645ms for default service account to be created ...
	I1222 01:31:59.881380 2107164 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:31:59.892351 2107164 system_pods.go:86] 8 kube-system pods found
	I1222 01:31:59.892387 2107164 system_pods.go:89] "coredns-5dd5756b68-6hgk8" [f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:31:59.892394 2107164 system_pods.go:89] "etcd-old-k8s-version-535197" [5b73ae66-a1c5-4c7b-b8a5-654306804a94] Running
	I1222 01:31:59.892400 2107164 system_pods.go:89] "kindnet-bf7tv" [d6b07eb1-ed67-41c8-90b1-50f57ab9de7c] Running
	I1222 01:31:59.892405 2107164 system_pods.go:89] "kube-apiserver-old-k8s-version-535197" [ba4de7ac-8e7f-42cd-b7cd-f2b1b0dbfa2b] Running
	I1222 01:31:59.892410 2107164 system_pods.go:89] "kube-controller-manager-old-k8s-version-535197" [9ba6fae4-da1d-4308-af2e-d8f4adcf32ee] Running
	I1222 01:31:59.892413 2107164 system_pods.go:89] "kube-proxy-lfhlv" [35aab6f4-62d9-4eb5-8371-8ce2ce72aac9] Running
	I1222 01:31:59.892417 2107164 system_pods.go:89] "kube-scheduler-old-k8s-version-535197" [56d0fa9f-dbd3-43a2-b8e5-3bfd5a130246] Running
	I1222 01:31:59.892423 2107164 system_pods.go:89] "storage-provisioner" [1933c639-b9fc-40e3-a270-1cf36c9e0e89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:31:59.892450 2107164 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1222 01:32:00.213477 2107164 system_pods.go:86] 8 kube-system pods found
	I1222 01:32:00.213520 2107164 system_pods.go:89] "coredns-5dd5756b68-6hgk8" [f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:32:00.213529 2107164 system_pods.go:89] "etcd-old-k8s-version-535197" [5b73ae66-a1c5-4c7b-b8a5-654306804a94] Running
	I1222 01:32:00.213537 2107164 system_pods.go:89] "kindnet-bf7tv" [d6b07eb1-ed67-41c8-90b1-50f57ab9de7c] Running
	I1222 01:32:00.213542 2107164 system_pods.go:89] "kube-apiserver-old-k8s-version-535197" [ba4de7ac-8e7f-42cd-b7cd-f2b1b0dbfa2b] Running
	I1222 01:32:00.213547 2107164 system_pods.go:89] "kube-controller-manager-old-k8s-version-535197" [9ba6fae4-da1d-4308-af2e-d8f4adcf32ee] Running
	I1222 01:32:00.213551 2107164 system_pods.go:89] "kube-proxy-lfhlv" [35aab6f4-62d9-4eb5-8371-8ce2ce72aac9] Running
	I1222 01:32:00.213555 2107164 system_pods.go:89] "kube-scheduler-old-k8s-version-535197" [56d0fa9f-dbd3-43a2-b8e5-3bfd5a130246] Running
	I1222 01:32:00.213592 2107164 system_pods.go:89] "storage-provisioner" [1933c639-b9fc-40e3-a270-1cf36c9e0e89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:32:00.529075 2107164 system_pods.go:86] 8 kube-system pods found
	I1222 01:32:00.529115 2107164 system_pods.go:89] "coredns-5dd5756b68-6hgk8" [f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:32:00.529123 2107164 system_pods.go:89] "etcd-old-k8s-version-535197" [5b73ae66-a1c5-4c7b-b8a5-654306804a94] Running
	I1222 01:32:00.529130 2107164 system_pods.go:89] "kindnet-bf7tv" [d6b07eb1-ed67-41c8-90b1-50f57ab9de7c] Running
	I1222 01:32:00.529136 2107164 system_pods.go:89] "kube-apiserver-old-k8s-version-535197" [ba4de7ac-8e7f-42cd-b7cd-f2b1b0dbfa2b] Running
	I1222 01:32:00.529146 2107164 system_pods.go:89] "kube-controller-manager-old-k8s-version-535197" [9ba6fae4-da1d-4308-af2e-d8f4adcf32ee] Running
	I1222 01:32:00.529150 2107164 system_pods.go:89] "kube-proxy-lfhlv" [35aab6f4-62d9-4eb5-8371-8ce2ce72aac9] Running
	I1222 01:32:00.529155 2107164 system_pods.go:89] "kube-scheduler-old-k8s-version-535197" [56d0fa9f-dbd3-43a2-b8e5-3bfd5a130246] Running
	I1222 01:32:00.529160 2107164 system_pods.go:89] "storage-provisioner" [1933c639-b9fc-40e3-a270-1cf36c9e0e89] Running
	I1222 01:32:00.529175 2107164 system_pods.go:126] duration metric: took 647.785391ms to wait for k8s-apps to be running ...
	I1222 01:32:00.529194 2107164 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:32:00.529275 2107164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:32:00.546093 2107164 system_svc.go:56] duration metric: took 16.892158ms WaitForService to wait for kubelet
	I1222 01:32:00.546123 2107164 kubeadm.go:587] duration metric: took 15.04016479s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:32:00.546142 2107164 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:32:00.549138 2107164 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:32:00.549170 2107164 node_conditions.go:123] node cpu capacity is 2
	I1222 01:32:00.549184 2107164 node_conditions.go:105] duration metric: took 3.03624ms to run NodePressure ...
	I1222 01:32:00.549196 2107164 start.go:242] waiting for startup goroutines ...
	I1222 01:32:00.549204 2107164 start.go:247] waiting for cluster config update ...
	I1222 01:32:00.549215 2107164 start.go:256] writing updated cluster config ...
	I1222 01:32:00.549863 2107164 ssh_runner.go:195] Run: rm -f paused
	I1222 01:32:00.559315 2107164 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:32:00.564586 2107164 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6hgk8" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.575750 2107164 pod_ready.go:94] pod "coredns-5dd5756b68-6hgk8" is "Ready"
	I1222 01:32:01.575782 2107164 pod_ready.go:86] duration metric: took 1.011163367s for pod "coredns-5dd5756b68-6hgk8" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.578916 2107164 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.583970 2107164 pod_ready.go:94] pod "etcd-old-k8s-version-535197" is "Ready"
	I1222 01:32:01.584040 2107164 pod_ready.go:86] duration metric: took 5.098849ms for pod "etcd-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.587052 2107164 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.592126 2107164 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-535197" is "Ready"
	I1222 01:32:01.592153 2107164 pod_ready.go:86] duration metric: took 5.075047ms for pod "kube-apiserver-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.595423 2107164 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.769038 2107164 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-535197" is "Ready"
	I1222 01:32:01.769070 2107164 pod_ready.go:86] duration metric: took 173.621794ms for pod "kube-controller-manager-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:01.970986 2107164 pod_ready.go:83] waiting for pod "kube-proxy-lfhlv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:02.368829 2107164 pod_ready.go:94] pod "kube-proxy-lfhlv" is "Ready"
	I1222 01:32:02.368854 2107164 pod_ready.go:86] duration metric: took 397.842632ms for pod "kube-proxy-lfhlv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:02.569563 2107164 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:02.969044 2107164 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-535197" is "Ready"
	I1222 01:32:02.969086 2107164 pod_ready.go:86] duration metric: took 399.491697ms for pod "kube-scheduler-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:32:02.969101 2107164 pod_ready.go:40] duration metric: took 2.409698044s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
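The extra wait walks each control-plane label in turn; only coredns needed real time (~1.01s), the rest were already Ready. Roughly the same check for one of those labels, expressed with kubectl wait (repeat per label from the list above):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m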
	I1222 01:32:03.027391 2107164 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1222 01:32:03.030422 2107164 out.go:203] 
	W1222 01:32:03.033378 2107164 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1222 01:32:03.036339 2107164 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1222 01:32:03.039275 2107164 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-535197" cluster and "default" namespace by default
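A five-minor-version gap between the host kubectl (1.33.2) and the 1.28.0 control plane is well outside kubectl's supported +/-1 version skew, hence the warning. The suggested route goes through the version-matched binary minikube already cached:

    # run the cached v1.28.0 kubectl against this profile
    minikube -p old-k8s-version-535197 kubectl -- version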
	
	
	==> CRI-O <==
	Dec 22 01:32:00 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:00.193689304Z" level=info msg="Created container 81a7bb5ee535938123a963a8d35b97fc11c19610ab9ce0259c09fc3183042d42: kube-system/storage-provisioner/storage-provisioner" id=5bdd99fb-3746-450f-ae89-1a9b16785925 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:32:00 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:00.194833827Z" level=info msg="Starting container: 81a7bb5ee535938123a963a8d35b97fc11c19610ab9ce0259c09fc3183042d42" id=73ed7000-5885-43d7-bc7f-e201aa9cc708 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:32:00 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:00.231991111Z" level=info msg="Started container" PID=1922 containerID=81a7bb5ee535938123a963a8d35b97fc11c19610ab9ce0259c09fc3183042d42 description=kube-system/storage-provisioner/storage-provisioner id=73ed7000-5885-43d7-bc7f-e201aa9cc708 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a405875a23d94fd93ac9eef97d7ddc8e3823992b435e36e152ac37cf9d75decb
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.566087507Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cdb10191-2ec3-4337-be66-7f763075765b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.566175217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.578056195Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba UID:873a5db1-c241-4c03-8906-ba8ace1623cf NetNS:/var/run/netns/9c67dd16-c765-4268-981a-f0647b6379bd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400136c2e0}] Aliases:map[]}"
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.578268341Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.5890343Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba UID:873a5db1-c241-4c03-8906-ba8ace1623cf NetNS:/var/run/netns/9c67dd16-c765-4268-981a-f0647b6379bd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400136c2e0}] Aliases:map[]}"
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.589638787Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.595067423Z" level=info msg="Ran pod sandbox 861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba with infra container: default/busybox/POD" id=cdb10191-2ec3-4337-be66-7f763075765b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.596284477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e6e6564d-fca5-44a1-89a3-4b7ab4278b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.596596509Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e6e6564d-fca5-44a1-89a3-4b7ab4278b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.596723233Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e6e6564d-fca5-44a1-89a3-4b7ab4278b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.597778725Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=534a5efa-c817-47f0-b1a5-9334ba4aae59 name=/runtime.v1.ImageService/PullImage
	Dec 22 01:32:03 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:03.600325684Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.569445954Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=534a5efa-c817-47f0-b1a5-9334ba4aae59 name=/runtime.v1.ImageService/PullImage
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.572526681Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=149a6e2c-02c8-4db7-a95f-3fbac2db44ed name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.575120457Z" level=info msg="Creating container: default/busybox/busybox" id=229f1320-71e2-4b3a-a7d2-2a1d8e9b4338 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.575247986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.580075688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.580552367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.597604791Z" level=info msg="Created container be96b88467b2859dcc97e834d4e5e1e7a121385add9bac2d6e4aeebe40e14f7f: default/busybox/busybox" id=229f1320-71e2-4b3a-a7d2-2a1d8e9b4338 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.598357777Z" level=info msg="Starting container: be96b88467b2859dcc97e834d4e5e1e7a121385add9bac2d6e4aeebe40e14f7f" id=23b478d0-9265-470b-929e-3bf7f98c3f5e name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:32:05 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:05.60017972Z" level=info msg="Started container" PID=1986 containerID=be96b88467b2859dcc97e834d4e5e1e7a121385add9bac2d6e4aeebe40e14f7f description=default/busybox/busybox id=23b478d0-9265-470b-929e-3bf7f98c3f5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba
	Dec 22 01:32:12 old-k8s-version-535197 crio[841]: time="2025-12-22T01:32:12.415577616Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	be96b88467b28       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   861781aced72d       busybox                                          default
	8f23fcba85406       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   3305b6e15c772       coredns-5dd5756b68-6hgk8                         kube-system
	81a7bb5ee5359       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   a405875a23d94       storage-provisioner                              kube-system
	a82a20b4e9503       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   96ce2aec2d65a       kindnet-bf7tv                                    kube-system
	25409ab7fcc0f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   c7e3235117ec7       kube-proxy-lfhlv                                 kube-system
	a94d4bdc34ab7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   bccd9a38443bb       kube-scheduler-old-k8s-version-535197            kube-system
	292274545d1fa       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   2ce27b2511f3b       kube-controller-manager-old-k8s-version-535197   kube-system
	d98a786aa62bc       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   e45fb8c1ed108       kube-apiserver-old-k8s-version-535197            kube-system
	39bcf524222d4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   a5b5127487496       etcd-old-k8s-version-535197                      kube-system
	
	
	==> coredns [8f23fcba85406cf20f9d91b6b1cd1a36a5297552c77721f3ae6b4521e4a7772e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38172 - 40968 "HINFO IN 5824499904699861147.7843645569508297425. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023312443s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-535197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-535197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=old-k8s-version-535197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_31_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:31:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-535197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:32:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:32:03 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:32:03 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:32:03 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:32:03 +0000   Mon, 22 Dec 2025 01:31:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-535197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                7cd106b7-7f02-4dbd-b127-b7707e12d39f
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-6hgk8                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-535197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-bf7tv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-535197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-535197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-lfhlv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-535197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-535197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-535197 event: Registered Node old-k8s-version-535197 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-535197 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec22 00:58] overlayfs: idmapped layers are currently not supported
	[Dec22 00:59] overlayfs: idmapped layers are currently not supported
	[Dec22 01:00] overlayfs: idmapped layers are currently not supported
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [39bcf524222d4bf84a220aadffcfe24c25e68435a42603278863f0e8f551404a] <==
	{"level":"info","ts":"2025-12-22T01:31:25.954047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-22T01:31:25.954193Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-22T01:31:25.957068Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-22T01:31:25.957198Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-22T01:31:25.957348Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-22T01:31:25.961914Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-22T01:31:25.961993Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-22T01:31:26.133374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-22T01:31:26.133487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-22T01:31:26.133524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-22T01:31:26.133573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-22T01:31:26.133604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-22T01:31:26.133645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-22T01:31:26.133679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-22T01:31:26.136143Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:31:26.137528Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-535197 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-22T01:31:26.137599Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-22T01:31:26.139149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-22T01:31:26.139281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-22T01:31:26.140148Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-22T01:31:26.145564Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:31:26.145727Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:31:26.145789Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:31:26.146173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-22T01:31:26.146223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:32:14 up 1 day,  8:14,  0 user,  load average: 2.47, 2.75, 2.28
	Linux old-k8s-version-535197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a82a20b4e9503368f40465796270b4e8fa57a34d63f3e70a07cd469c9b247b33] <==
	I1222 01:31:49.122341       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:31:49.122700       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:31:49.122877       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:31:49.122897       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:31:49.122907       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:31:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:31:49.324755       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:31:49.324880       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:31:49.324919       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:31:49.326070       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:31:49.525402       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:31:49.525428       1 metrics.go:72] Registering metrics
	I1222 01:31:49.525485       1 controller.go:711] "Syncing nftables rules"
	I1222 01:31:59.329471       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:31:59.329541       1 main.go:301] handling current node
	I1222 01:32:09.325962       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:32:09.326001       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d98a786aa62bc5342354592cf57690f5cc089460c0c56331e7ab0f309d44b701] <==
	I1222 01:31:30.025134       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1222 01:31:30.026225       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1222 01:31:30.026825       1 aggregator.go:166] initial CRD sync complete...
	I1222 01:31:30.027484       1 autoregister_controller.go:141] Starting autoregister controller
	I1222 01:31:30.027579       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:31:30.027612       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:31:30.033793       1 shared_informer.go:318] Caches are synced for configmaps
	I1222 01:31:30.034662       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1222 01:31:30.034776       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1222 01:31:30.068202       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:31:30.820530       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1222 01:31:30.825803       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1222 01:31:30.825883       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:31:31.409616       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:31:31.451850       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:31:31.547875       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1222 01:31:31.557044       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1222 01:31:31.558152       1 controller.go:624] quota admission added evaluator for: endpoints
	I1222 01:31:31.562711       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:31:31.888966       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1222 01:31:33.117625       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1222 01:31:33.139902       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1222 01:31:33.178730       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1222 01:31:44.869195       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1222 01:31:45.663017       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [292274545d1fad1ec749525def706e95d594dcf78d1626a2a0aef886a46c193e] <==
	I1222 01:31:44.899676       1 shared_informer.go:318] Caches are synced for resource quota
	I1222 01:31:44.911744       1 shared_informer.go:318] Caches are synced for attach detach
	I1222 01:31:44.939786       1 shared_informer.go:318] Caches are synced for persistent volume
	I1222 01:31:45.349555       1 shared_informer.go:318] Caches are synced for garbage collector
	I1222 01:31:45.349584       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1222 01:31:45.366743       1 shared_informer.go:318] Caches are synced for garbage collector
	I1222 01:31:45.674320       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1222 01:31:45.832407       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pgfpm"
	I1222 01:31:45.855683       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6hgk8"
	I1222 01:31:45.916104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="241.534037ms"
	I1222 01:31:45.935271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.116099ms"
	I1222 01:31:45.962349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.032528ms"
	I1222 01:31:45.962451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.954µs"
	I1222 01:31:46.916223       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1222 01:31:46.957885       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pgfpm"
	I1222 01:31:46.973583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.766354ms"
	I1222 01:31:47.005613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.982007ms"
	I1222 01:31:47.005695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.817µs"
	I1222 01:31:47.005794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.344µs"
	I1222 01:31:59.656708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.25µs"
	I1222 01:31:59.677819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.134µs"
	I1222 01:31:59.792170       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1222 01:32:00.413150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.469µs"
	I1222 01:32:01.422344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.787524ms"
	I1222 01:32:01.422474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.293µs"
	
	
	==> kube-proxy [25409ab7fcc0fbf6d2d62d39af2548e57ae43efa0d37d573c5cc210477785b26] <==
	I1222 01:31:46.094637       1 server_others.go:69] "Using iptables proxy"
	I1222 01:31:46.113439       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1222 01:31:46.188248       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:31:46.193575       1 server_others.go:152] "Using iptables Proxier"
	I1222 01:31:46.193612       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1222 01:31:46.193620       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1222 01:31:46.193653       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1222 01:31:46.193856       1 server.go:846] "Version info" version="v1.28.0"
	I1222 01:31:46.193866       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:31:46.198267       1 config.go:188] "Starting service config controller"
	I1222 01:31:46.198284       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1222 01:31:46.198313       1 config.go:97] "Starting endpoint slice config controller"
	I1222 01:31:46.198318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1222 01:31:46.198703       1 config.go:315] "Starting node config controller"
	I1222 01:31:46.198710       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1222 01:31:46.298885       1 shared_informer.go:318] Caches are synced for node config
	I1222 01:31:46.298923       1 shared_informer.go:318] Caches are synced for service config
	I1222 01:31:46.298948       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a94d4bdc34ab754cb6ed662326a7afb62532f25838fb1da0797a0756917ec5f4] <==
	W1222 01:31:30.624711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1222 01:31:30.624736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1222 01:31:30.629186       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1222 01:31:30.629280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1222 01:31:30.629516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1222 01:31:30.629605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1222 01:31:30.629747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1222 01:31:30.629798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1222 01:31:30.629905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1222 01:31:30.629955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1222 01:31:30.629980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1222 01:31:30.630013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1222 01:31:30.630067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1222 01:31:30.630085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1222 01:31:30.630164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1222 01:31:30.630211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1222 01:31:30.630231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1222 01:31:30.630212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1222 01:31:30.630166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1222 01:31:30.630266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1222 01:31:30.630345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1222 01:31:30.630398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1222 01:31:30.630366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1222 01:31:30.630486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1222 01:31:31.715163       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: I1222 01:31:45.082870    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6b07eb1-ed67-41c8-90b1-50f57ab9de7c-cni-cfg\") pod \"kindnet-bf7tv\" (UID: \"d6b07eb1-ed67-41c8-90b1-50f57ab9de7c\") " pod="kube-system/kindnet-bf7tv"
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: I1222 01:31:45.082909    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35aab6f4-62d9-4eb5-8371-8ce2ce72aac9-xtables-lock\") pod \"kube-proxy-lfhlv\" (UID: \"35aab6f4-62d9-4eb5-8371-8ce2ce72aac9\") " pod="kube-system/kube-proxy-lfhlv"
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: I1222 01:31:45.082969    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35aab6f4-62d9-4eb5-8371-8ce2ce72aac9-lib-modules\") pod \"kube-proxy-lfhlv\" (UID: \"35aab6f4-62d9-4eb5-8371-8ce2ce72aac9\") " pod="kube-system/kube-proxy-lfhlv"
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: E1222 01:31:45.207417    1374 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: E1222 01:31:45.207785    1374 projected.go:198] Error preparing data for projected volume kube-api-access-25nbn for pod kube-system/kindnet-bf7tv: configmap "kube-root-ca.crt" not found
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: E1222 01:31:45.208005    1374 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6b07eb1-ed67-41c8-90b1-50f57ab9de7c-kube-api-access-25nbn podName:d6b07eb1-ed67-41c8-90b1-50f57ab9de7c nodeName:}" failed. No retries permitted until 2025-12-22 01:31:45.707975569 +0000 UTC m=+12.683152143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25nbn" (UniqueName: "kubernetes.io/projected/d6b07eb1-ed67-41c8-90b1-50f57ab9de7c-kube-api-access-25nbn") pod "kindnet-bf7tv" (UID: "d6b07eb1-ed67-41c8-90b1-50f57ab9de7c") : configmap "kube-root-ca.crt" not found
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: E1222 01:31:45.213776    1374 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: E1222 01:31:45.214226    1374 projected.go:198] Error preparing data for projected volume kube-api-access-56z8s for pod kube-system/kube-proxy-lfhlv: configmap "kube-root-ca.crt" not found
	Dec 22 01:31:45 old-k8s-version-535197 kubelet[1374]: E1222 01:31:45.214710    1374 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35aab6f4-62d9-4eb5-8371-8ce2ce72aac9-kube-api-access-56z8s podName:35aab6f4-62d9-4eb5-8371-8ce2ce72aac9 nodeName:}" failed. No retries permitted until 2025-12-22 01:31:45.714587324 +0000 UTC m=+12.689763898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-56z8s" (UniqueName: "kubernetes.io/projected/35aab6f4-62d9-4eb5-8371-8ce2ce72aac9-kube-api-access-56z8s") pod "kube-proxy-lfhlv" (UID: "35aab6f4-62d9-4eb5-8371-8ce2ce72aac9") : configmap "kube-root-ca.crt" not found
	Dec 22 01:31:49 old-k8s-version-535197 kubelet[1374]: I1222 01:31:49.361523    1374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lfhlv" podStartSLOduration=5.361472697 podCreationTimestamp="2025-12-22 01:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:31:46.352249075 +0000 UTC m=+13.327425657" watchObservedRunningTime="2025-12-22 01:31:49.361472697 +0000 UTC m=+16.336649288"
	Dec 22 01:31:53 old-k8s-version-535197 kubelet[1374]: I1222 01:31:53.213148    1374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-bf7tv" podStartSLOduration=6.139611423 podCreationTimestamp="2025-12-22 01:31:44 +0000 UTC" firstStartedPulling="2025-12-22 01:31:45.878999362 +0000 UTC m=+12.854175936" lastFinishedPulling="2025-12-22 01:31:48.95249355 +0000 UTC m=+15.927670132" observedRunningTime="2025-12-22 01:31:49.362093938 +0000 UTC m=+16.337270520" watchObservedRunningTime="2025-12-22 01:31:53.213105619 +0000 UTC m=+20.188282192"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.625736    1374 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.655772    1374 topology_manager.go:215] "Topology Admit Handler" podUID="f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d" podNamespace="kube-system" podName="coredns-5dd5756b68-6hgk8"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.662504    1374 topology_manager.go:215] "Topology Admit Handler" podUID="1933c639-b9fc-40e3-a270-1cf36c9e0e89" podNamespace="kube-system" podName="storage-provisioner"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.795796    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9bz6\" (UniqueName: \"kubernetes.io/projected/1933c639-b9fc-40e3-a270-1cf36c9e0e89-kube-api-access-l9bz6\") pod \"storage-provisioner\" (UID: \"1933c639-b9fc-40e3-a270-1cf36c9e0e89\") " pod="kube-system/storage-provisioner"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.796080    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d-config-volume\") pod \"coredns-5dd5756b68-6hgk8\" (UID: \"f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d\") " pod="kube-system/coredns-5dd5756b68-6hgk8"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.796131    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhwd\" (UniqueName: \"kubernetes.io/projected/f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d-kube-api-access-2mhwd\") pod \"coredns-5dd5756b68-6hgk8\" (UID: \"f3fb9fa7-aaf8-45e9-8efd-08b4cb3b015d\") " pod="kube-system/coredns-5dd5756b68-6hgk8"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: I1222 01:31:59.796163    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1933c639-b9fc-40e3-a270-1cf36c9e0e89-tmp\") pod \"storage-provisioner\" (UID: \"1933c639-b9fc-40e3-a270-1cf36c9e0e89\") " pod="kube-system/storage-provisioner"
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: W1222 01:31:59.976715    1374 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/crio-a405875a23d94fd93ac9eef97d7ddc8e3823992b435e36e152ac37cf9d75decb WatchSource:0}: Error finding container a405875a23d94fd93ac9eef97d7ddc8e3823992b435e36e152ac37cf9d75decb: Status 404 returned error can't find the container with id a405875a23d94fd93ac9eef97d7ddc8e3823992b435e36e152ac37cf9d75decb
	Dec 22 01:31:59 old-k8s-version-535197 kubelet[1374]: W1222 01:31:59.988086    1374 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/crio-3305b6e15c772d11763eef6e209f80a9be18556f244b38374600a14ce166cd13 WatchSource:0}: Error finding container 3305b6e15c772d11763eef6e209f80a9be18556f244b38374600a14ce166cd13: Status 404 returned error can't find the container with id 3305b6e15c772d11763eef6e209f80a9be18556f244b38374600a14ce166cd13
	Dec 22 01:32:00 old-k8s-version-535197 kubelet[1374]: I1222 01:32:00.455803    1374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6hgk8" podStartSLOduration=15.455690415 podCreationTimestamp="2025-12-22 01:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:32:00.412632024 +0000 UTC m=+27.387808598" watchObservedRunningTime="2025-12-22 01:32:00.455690415 +0000 UTC m=+27.430866997"
	Dec 22 01:32:01 old-k8s-version-535197 kubelet[1374]: I1222 01:32:01.407241    1374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.407199339 podCreationTimestamp="2025-12-22 01:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:32:00.456859938 +0000 UTC m=+27.432036520" watchObservedRunningTime="2025-12-22 01:32:01.407199339 +0000 UTC m=+28.382375921"
	Dec 22 01:32:03 old-k8s-version-535197 kubelet[1374]: I1222 01:32:03.264703    1374 topology_manager.go:215] "Topology Admit Handler" podUID="873a5db1-c241-4c03-8906-ba8ace1623cf" podNamespace="default" podName="busybox"
	Dec 22 01:32:03 old-k8s-version-535197 kubelet[1374]: I1222 01:32:03.422926    1374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sldc\" (UniqueName: \"kubernetes.io/projected/873a5db1-c241-4c03-8906-ba8ace1623cf-kube-api-access-6sldc\") pod \"busybox\" (UID: \"873a5db1-c241-4c03-8906-ba8ace1623cf\") " pod="default/busybox"
	Dec 22 01:32:03 old-k8s-version-535197 kubelet[1374]: W1222 01:32:03.591822    1374 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/crio-861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba WatchSource:0}: Error finding container 861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba: Status 404 returned error can't find the container with id 861781aced72dc35c36c7fd2b16c984141cf32ef9f70ae86aedf2d51d60b46ba
	
	
	==> storage-provisioner [81a7bb5ee535938123a963a8d35b97fc11c19610ab9ce0259c09fc3183042d42] <==
	I1222 01:32:00.254184       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:32:00.272774       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:32:00.272836       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1222 01:32:00.301409       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:32:00.308465       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-535197_cd2b0d64-b459-4e4e-b991-cbb5ebc3d0ff!
	I1222 01:32:00.309687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cce17c97-ab91-4b3b-aab2-fd0b22a9040e", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-535197_cd2b0d64-b459-4e4e-b991-cbb5ebc3d0ff became leader
	I1222 01:32:00.408834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-535197_cd2b0d64-b459-4e4e-b991-cbb5ebc3d0ff!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-535197 -n old-k8s-version-535197
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-535197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.48s)
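The post-mortem above (container status, describe nodes, dmesg, per-component logs) is what the test helper gathers through minikube. A minimal sketch of collecting the same sections by hand, assuming the old-k8s-version-535197 profile is still up (crictl runs inside the node, so it goes through minikube ssh):

	out/minikube-linux-arm64 logs -p old-k8s-version-535197                                            # full post-mortem in one command
	out/minikube-linux-arm64 ssh -p old-k8s-version-535197 -- sudo crictl ps -a                        # ==> container status <==
	kubectl --context old-k8s-version-535197 describe node old-k8s-version-535197                      # ==> describe nodes <==
	out/minikube-linux-arm64 ssh -p old-k8s-version-535197 -- sudo journalctl -u kubelet --no-pager    # ==> kubelet <==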

TestStartStop/group/no-preload/serial/FirstStart (515.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m33.667427838s)
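With --preload=false minikube skips the preloaded image tarball and instead resolves and caches every control-plane image individually, which is why the stderr below acquires one cache lock per registry.k8s.io image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, ...). As a rough cross-check of the expected image set for this version, kubeadm can list it; a sketch, assuming a kubeadm binary of a matching version is available locally:

	kubeadm config images list --kubernetes-version v1.35.0-rc.1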

-- stdout --
	* [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1222 01:32:54.033494 2113859 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:32:54.033719 2113859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:32:54.033749 2113859 out.go:374] Setting ErrFile to fd 2...
	I1222 01:32:54.033771 2113859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:32:54.034085 2113859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:32:54.034535 2113859 out.go:368] Setting JSON to false
	I1222 01:32:54.035756 2113859 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116123,"bootTime":1766251051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:32:54.035869 2113859 start.go:143] virtualization:  
	I1222 01:32:54.039623 2113859 out.go:179] * [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:32:54.042831 2113859 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:32:54.042915 2113859 notify.go:221] Checking for updates...
	I1222 01:32:54.049207 2113859 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:32:54.052271 2113859 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:32:54.055189 2113859 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:32:54.058044 2113859 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:32:54.060848 2113859 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:32:54.064390 2113859 config.go:182] Loaded profile config "old-k8s-version-535197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:32:54.064526 2113859 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:32:54.095516 2113859 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:32:54.095679 2113859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:32:54.165511 2113859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:32:54.155633193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:32:54.165626 2113859 docker.go:319] overlay module found
	I1222 01:32:54.168797 2113859 out.go:179] * Using the docker driver based on user configuration
	I1222 01:32:54.171717 2113859 start.go:309] selected driver: docker
	I1222 01:32:54.171740 2113859 start.go:928] validating driver "docker" against <nil>
	I1222 01:32:54.171755 2113859 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:32:54.172503 2113859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:32:54.241624 2113859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:32:54.231791653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:32:54.241776 2113859 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:32:54.242087 2113859 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:32:54.245038 2113859 out.go:179] * Using Docker driver with root privileges
	I1222 01:32:54.247987 2113859 cni.go:84] Creating CNI manager for ""
	I1222 01:32:54.248055 2113859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:32:54.248070 2113859 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:32:54.248154 2113859 start.go:353] cluster config:
	{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:32:54.251532 2113859 out.go:179] * Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	I1222 01:32:54.254388 2113859 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:32:54.257314 2113859 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:32:54.260261 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:32:54.260351 2113859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:32:54.260541 2113859 cache.go:107] acquiring lock: {Name:mkce8c58631c5f0d23bf68d20d333345f25cc3e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.260625 2113859 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:32:54.260643 2113859 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.938µs
	I1222 01:32:54.260658 2113859 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
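
The cache.go lines above show the per-image pattern this log repeats for every component: take a named lock, stat the tar under .minikube/cache/images, and download only on a miss (the storage-provisioner check takes about 111µs because it hits). A sketch of that check-before-download flow with the locking omitted; ensureCached and the fetch callback are illustrative names, not minikube's API:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensureCached mirrors the flow logged by cache.go: if the tar for an
    // image already exists in the cache, report a hit and return; otherwise
    // call fetch to populate it.
    func ensureCached(cacheDir, image string, fetch func(dst string) error) error {
        dst := filepath.Join(cacheDir, filepath.FromSlash(image))
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("cache image %q exists, skipping download\n", image)
            return nil
        }
        if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
            return err
        }
        return fetch(dst) // cache miss: download the tar
    }

    func main() {
        _ = ensureCached("/tmp/cache/images/arm64",
            "gcr.io/k8s-minikube/storage-provisioner_v5",
            func(dst string) error {
                fmt.Println("downloading to", dst)
                return os.WriteFile(dst, nil, 0o644)
            })
    }
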
	I1222 01:32:54.260673 2113859 cache.go:107] acquiring lock: {Name:mk4201be15d3fc5460d5a3841ee13ef2c95ea347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.260776 2113859 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:32:54.260964 2113859 cache.go:107] acquiring lock: {Name:mkf2e3bf6867cb15e485fef5b9e233e7559c99a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261043 2113859 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:32:54.261216 2113859 cache.go:107] acquiring lock: {Name:mkb8446b3e0ab7883b46b478905fd798f55f01e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261418 2113859 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:32:54.261616 2113859 cache.go:107] acquiring lock: {Name:mk77482801a8c57f7731abc5d9887216f12ba8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261720 2113859 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:32:54.261881 2113859 cache.go:107] acquiring lock: {Name:mk692f79fde95c3755a4408e63ab002578541c74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261943 2113859 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:32:54.261958 2113859 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 81.179µs
	I1222 01:32:54.261966 2113859 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:32:54.261984 2113859 cache.go:107] acquiring lock: {Name:mkd233ea519aa3a2daed97ba862dcfb34d3be39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.262064 2113859 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:32:54.262213 2113859 cache.go:107] acquiring lock: {Name:mk309b91ac5e9b8dc5721727680b39f391c5db3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.262321 2113859 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:32:54.263575 2113859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:32:54.263641 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json: {Name:mk7f1ee0d07d56b850c7f1ba7b969a78d9e732b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:32:54.264812 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:32:54.266342 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:32:54.266586 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:32:54.267191 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:32:54.267585 2113859 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:32:54.267880 2113859 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:32:54.287070 2113859 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:32:54.287091 2113859 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:32:54.287105 2113859 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:32:54.287135 2113859 start.go:360] acquireMachinesLock for no-preload-491054: {Name:mk963c2c890d85a550b13090cd7467d59e3caf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.287228 2113859 start.go:364] duration metric: took 78.931µs to acquireMachinesLock for "no-preload-491054"
	I1222 01:32:54.287257 2113859 start.go:93] Provisioning new machine with config: &{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:32:54.287327 2113859 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:32:54.290930 2113859 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:32:54.291159 2113859 start.go:159] libmachine.API.Create for "no-preload-491054" (driver="docker")
	I1222 01:32:54.291195 2113859 client.go:173] LocalClient.Create starting
	I1222 01:32:54.291269 2113859 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:32:54.291299 2113859 main.go:144] libmachine: Decoding PEM data...
	I1222 01:32:54.291313 2113859 main.go:144] libmachine: Parsing certificate...
	I1222 01:32:54.291370 2113859 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:32:54.291392 2113859 main.go:144] libmachine: Decoding PEM data...
	I1222 01:32:54.291403 2113859 main.go:144] libmachine: Parsing certificate...
	I1222 01:32:54.291849 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:32:54.316750 2113859 cli_runner.go:211] docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:32:54.316834 2113859 network_create.go:284] running [docker network inspect no-preload-491054] to gather additional debugging logs...
	I1222 01:32:54.316855 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054
	W1222 01:32:54.335649 2113859 cli_runner.go:211] docker network inspect no-preload-491054 returned with exit code 1
	I1222 01:32:54.335682 2113859 network_create.go:287] error running [docker network inspect no-preload-491054]: docker network inspect no-preload-491054: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-491054 not found
	I1222 01:32:54.335696 2113859 network_create.go:289] output of [docker network inspect no-preload-491054]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-491054 not found
	
	** /stderr **
	I1222 01:32:54.335794 2113859 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:32:54.356466 2113859 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:32:54.356865 2113859 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:32:54.357252 2113859 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:32:54.357789 2113859 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a487f0}
	I1222 01:32:54.357815 2113859 network_create.go:124] attempt to create docker network no-preload-491054 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:32:54.357887 2113859 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-491054 no-preload-491054
	I1222 01:32:54.441852 2113859 network_create.go:108] docker network no-preload-491054 192.168.76.0/24 created
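
network.go above scans candidate private /24s in a fixed order (192.168.49.0, then 58, 67, 76 — a stride of 9 in the third octet), skipping every subnet already backing a bridge interface, and creates the docker network on the first free one. A sketch of that scan, assuming the taken subnets are known up front; the stride and start address are read off the log, the function name is illustrative:

    package main

    import "fmt"

    // firstFreeSubnet steps the third octet by 9 starting from
    // 192.168.49.0/24, as the network.go lines above do, and returns the
    // first subnet not already taken.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 254; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet
            }
        }
        return "" // nothing free in the scanned range
    }

    func main() {
        taken := map[string]bool{ // the three subnets the log skips
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints: 192.168.76.0/24
    }
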
	I1222 01:32:54.441881 2113859 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-491054" container
	I1222 01:32:54.441967 2113859 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:32:54.458632 2113859 cli_runner.go:164] Run: docker volume create no-preload-491054 --label name.minikube.sigs.k8s.io=no-preload-491054 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:32:54.478073 2113859 oci.go:103] Successfully created a docker volume no-preload-491054
	I1222 01:32:54.478163 2113859 cli_runner.go:164] Run: docker run --rm --name no-preload-491054-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491054 --entrypoint /usr/bin/test -v no-preload-491054:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:32:54.618166 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:32:54.624668 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:32:54.627006 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:32:54.628218 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:32:54.636973 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:32:54.648281 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:32:55.019288 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:32:55.019318 2113859 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 757.706011ms
	I1222 01:32:55.019334 2113859 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:32:55.181451 2113859 oci.go:107] Successfully prepared a docker volume no-preload-491054
	I1222 01:32:55.181497 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1222 01:32:55.181621 2113859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:32:55.181770 2113859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:32:55.246556 2113859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-491054 --name no-preload-491054 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491054 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-491054 --network no-preload-491054 --ip 192.168.76.2 --volume no-preload-491054:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:32:55.548149 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:32:55.548173 2113859 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.286961516s
	I1222 01:32:55.548187 2113859 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:32:55.637916 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:32:55.637949 2113859 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.376987849s
	I1222 01:32:55.637962 2113859 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:32:55.644590 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:32:55.644659 2113859 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.382447056s
	I1222 01:32:55.644686 2113859 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:32:55.645277 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:32:55.645297 2113859 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 1.383313901s
	I1222 01:32:55.645307 2113859 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:32:55.676721 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Running}}
	I1222 01:32:55.706143 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:55.728682 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:32:55.732697 2113859 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.47201229s
	I1222 01:32:55.733125 2113859 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:32:55.733195 2113859 cache.go:87] Successfully saved all images to host disk.
	I1222 01:32:55.742044 2113859 cli_runner.go:164] Run: docker exec no-preload-491054 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:32:55.793131 2113859 oci.go:144] the created container "no-preload-491054" has a running status.
	I1222 01:32:55.793157 2113859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa...
	I1222 01:32:56.150782 2113859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:32:56.176874 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:56.199648 2113859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:32:56.199777 2113859 kic_runner.go:114] Args: [docker exec --privileged no-preload-491054 chown docker:docker /home/docker/.ssh/authorized_keys]
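
kic.go and kic_runner.go above mint a fresh RSA key for the node, copy the public half into the container as /home/docker/.ssh/authorized_keys (381 bytes), and chown it to the docker user. A self-contained sketch of generating such a pair with the standard library plus golang.org/x/crypto/ssh; the output file names are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the private key, as kic.go does for id_rsa.
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }

        // Marshal the public half in authorized_keys format (id_rsa.pub).
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }
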
	I1222 01:32:56.287457 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:56.317221 2113859 machine.go:94] provisionDockerMachine start ...
	I1222 01:32:56.317311 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:56.348674 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:56.349032 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:56.349043 2113859 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:32:56.349725 2113859 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56696->127.0.0.1:38751: read: connection reset by peer
	I1222 01:32:59.497388 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
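
Two things happen in the exchange above: the ephemeral host port docker bound for 22/tcp is resolved with container inspect (38751 here), and the first SSH dial fails with connection reset because sshd inside the container is not up yet — the hostname command only succeeds about three seconds later. A sketch of both steps, using golang.org/x/crypto/ssh rather than minikube's native client; the names and the retry policy are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // hostPort resolves the host port docker published for 22/tcp, mirroring
    // the `docker container inspect -f ...HostPort` calls above.
    func hostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        return strings.TrimSpace(string(out)), err
    }

    // dialWithRetry keeps dialing until sshd is ready; the first attempt in
    // the log fails with "connection reset by peer".
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(time.Second)
        }
        return nil, err
    }

    func main() {
        port, err := hostPort("no-preload-491054")
        if err != nil {
            panic(err)
        }
        fmt.Println("sshd published on 127.0.0.1:" + port)
        // Key-based auth as used by the log is omitted for brevity.
    }
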
	I1222 01:32:59.497478 2113859 ubuntu.go:182] provisioning hostname "no-preload-491054"
	I1222 01:32:59.497575 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:59.521272 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:59.521594 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:59.521607 2113859 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-491054 && echo "no-preload-491054" | sudo tee /etc/hostname
	I1222 01:32:59.673216 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:32:59.673380 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:59.713978 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:59.714276 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:59.714295 2113859 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491054/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:32:59.877634 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:32:59.877663 2113859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:32:59.877705 2113859 ubuntu.go:190] setting up certificates
	I1222 01:32:59.877730 2113859 provision.go:84] configureAuth start
	I1222 01:32:59.877825 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:32:59.915162 2113859 provision.go:143] copyHostCerts
	I1222 01:32:59.915223 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:32:59.915233 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:32:59.915300 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:32:59.915379 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:32:59.915385 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:32:59.915410 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:32:59.915457 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:32:59.915462 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:32:59.915485 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:32:59.915527 2113859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.no-preload-491054 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054]
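
provision.go above generates the machine's server certificate with a SAN list covering every name the docker-machine endpoint answers to: both IPs (127.0.0.1 and the node's 192.168.76.2) and the hostnames localhost, minikube, and no-preload-491054. A sketch of issuing such a certificate with crypto/x509; it self-signs for brevity where provision.go signs with the minikube CA, and the 26280h lifetime is copied from the cluster config logged earlier:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-491054"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-491054"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", out, 0o644); err != nil {
            panic(err)
        }
    }
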
	I1222 01:33:00.019297 2113859 provision.go:177] copyRemoteCerts
	I1222 01:33:00.019469 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:33:00.019545 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.043873 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:00.239960 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:33:00.315095 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:33:00.352718 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:33:00.383851 2113859 provision.go:87] duration metric: took 506.093893ms to configureAuth
	I1222 01:33:00.383879 2113859 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:33:00.384086 2113859 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:33:00.384192 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.409890 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:00.410227 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:33:00.410243 2113859 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:33:00.748942 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:33:00.748964 2113859 machine.go:97] duration metric: took 4.431724259s to provisionDockerMachine
	I1222 01:33:00.748975 2113859 client.go:176] duration metric: took 6.457773322s to LocalClient.Create
	I1222 01:33:00.748997 2113859 start.go:167] duration metric: took 6.457839838s to libmachine.API.Create "no-preload-491054"
	I1222 01:33:00.749006 2113859 start.go:293] postStartSetup for "no-preload-491054" (driver="docker")
	I1222 01:33:00.749021 2113859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:33:00.749096 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:33:00.749155 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.768562 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:00.873634 2113859 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:33:00.877828 2113859 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:33:00.877854 2113859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:33:00.877865 2113859 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:33:00.877922 2113859 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:33:00.878011 2113859 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:33:00.878120 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:33:00.885734 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:00.904977 2113859 start.go:296] duration metric: took 155.94443ms for postStartSetup
	I1222 01:33:00.905414 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:33:00.926393 2113859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:33:00.926680 2113859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:33:00.926736 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.943814 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.043968 2113859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:33:01.049606 2113859 start.go:128] duration metric: took 6.762264752s to createHost
	I1222 01:33:01.049630 2113859 start.go:83] releasing machines lock for "no-preload-491054", held for 6.762389851s
	I1222 01:33:01.049702 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:33:01.074676 2113859 ssh_runner.go:195] Run: cat /version.json
	I1222 01:33:01.074732 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:01.074954 2113859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:33:01.075004 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:01.118316 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.119194 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.358638 2113859 ssh_runner.go:195] Run: systemctl --version
	I1222 01:33:01.368096 2113859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:33:01.437466 2113859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:33:01.443994 2113859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:33:01.444068 2113859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:33:01.492466 2113859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:33:01.492540 2113859 start.go:496] detecting cgroup driver to use...
	I1222 01:33:01.492587 2113859 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:33:01.492677 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:33:01.520160 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:33:01.553116 2113859 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:33:01.553278 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:33:01.578847 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:33:01.600093 2113859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:33:01.775882 2113859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:33:01.964970 2113859 docker.go:234] disabling docker service ...
	I1222 01:33:01.965089 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:33:01.996139 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:33:02.014677 2113859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:33:02.197851 2113859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:33:02.368250 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:33:02.384327 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:33:02.403515 2113859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:33:02.403650 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.415198 2113859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:33:02.415321 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.425372 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.434744 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.444077 2113859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:33:02.452445 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.461779 2113859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.476543 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
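
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A sketch of the first substitution done in Go instead of sed; the file path and image come from the log, setPauseImage is an illustrative name:

    package main

    import (
        "os"
        "regexp"
    )

    // setPauseImage is a hypothetical Go equivalent of the first sed call
    // above: rewrite any existing pause_image line to the pinned
    // registry.k8s.io image.
    func setPauseImage(path, image string) error {
        conf, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(conf, []byte(`pause_image = "`+image+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf",
            "registry.k8s.io/pause:3.10.1"); err != nil {
            panic(err)
        }
    }
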
	I1222 01:33:02.486290 2113859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:33:02.494924 2113859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:33:02.503372 2113859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:02.670149 2113859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:33:03.183820 2113859 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:33:03.183944 2113859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:33:03.191011 2113859 start.go:564] Will wait 60s for crictl version
	I1222 01:33:03.191125 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.195180 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:33:03.239592 2113859 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:33:03.239750 2113859 ssh_runner.go:195] Run: crio --version
	I1222 01:33:03.276484 2113859 ssh_runner.go:195] Run: crio --version
	I1222 01:33:03.325695 2113859 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:33:03.326970 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:33:03.347240 2113859 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:33:03.351832 2113859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:33:03.363978 2113859 kubeadm.go:884] updating cluster {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:33:03.364135 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:33:03.364193 2113859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:33:03.396066 2113859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:33:03.396096 2113859 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1222 01:33:03.396162 2113859 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:03.396375 2113859 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.396475 2113859 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.396559 2113859 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.396777 2113859 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.396934 2113859 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.397121 2113859 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.397230 2113859 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.400639 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.400925 2113859 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:03.401252 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.401482 2113859 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.401640 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.401776 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.401933 2113859 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.402175 2113859 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.651035 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1222 01:33:03.670795 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.672155 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.708386 2113859 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1222 01:33:03.708477 2113859 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.708554 2113859 ssh_runner.go:195] Run: which crictl
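
cache_images.go decides per image whether a transfer is needed: it asks the runtime for the image's stored ID (the sudo podman image inspect --format {{.Id}} runs above) and compares it with the digest expected for the cached tar; on absence or mismatch the stale image is removed with crictl rmi and reloaded from cache. A sketch of that comparison; needsTransfer and runtimeImageID are illustrative names:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeImageID asks the container runtime for an image's stored ID,
    // mirroring the `sudo podman image inspect --format {{.Id}}` calls above.
    func runtimeImageID(image string) (string, error) {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        return strings.TrimSpace(string(out)), err
    }

    // needsTransfer reports whether the image must be reloaded from cache:
    // either the runtime does not have it, or it stores a different ID than
    // the one expected for the cached tar.
    func needsTransfer(image, wantID string) bool {
        gotID, err := runtimeImageID(image)
        return err != nil || gotID != wantID
    }

    func main() {
        if needsTransfer("registry.k8s.io/pause:3.10.1",
            "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd") {
            fmt.Println(`"registry.k8s.io/pause:3.10.1" needs transfer`)
        }
    }
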
	I1222 01:33:03.737217 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.762779 2113859 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1222 01:33:03.762819 2113859 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.762930 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.763277 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.779160 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.853247 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.908757 2113859 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1222 01:33:03.908926 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:03.909017 2113859 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1222 01:33:03.909052 2113859 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.909090 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.908801 2113859 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.909157 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.949191 2113859 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1222 01:33:03.949417 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.949484 2113859 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.949532 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.979058 2113859 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1222 01:33:03.979103 2113859 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.979172 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:04.065704 2113859 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1222 01:33:04.065852 2113859 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.065919 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.066007 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:04.066112 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:04.066183 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.066412 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.066289 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:04.066318 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.232150 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.232281 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.232367 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.232456 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:04.232537 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.232627 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.232706 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:04.432423 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:33:04.432541 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.432706 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.432778 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1222 01:33:04.432869 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.432971 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.433075 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.433133 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.433231 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:04.577867 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.577952 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
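
ssh_runner.go above uses a cheap probe before each copy: stat -c "%s %y" on the destination; exit status 1 with "No such file or directory" means the ~22 MB kube-proxy tar has to go over the wire. A sketch of that check-then-copy pattern, shelling out to ssh/scp instead of minikube's in-process client; the size/mtime comparison the real check also performs is omitted:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // copyIfMissing probes the remote path with stat and only copies the
    // (large) tar when the probe fails, as the log lines above do.
    func copyIfMissing(host, local, remote string) error {
        probe := exec.Command("ssh", host, "stat", "-c", "%s %y", remote)
        if err := probe.Run(); err == nil {
            return nil // already present; a real check would compare size/mtime
        }
        fmt.Printf("existence check failed, copying %s -> %s:%s\n", local, host, remote)
        return exec.Command("scp", local, host+":"+remote).Run()
    }

    func main() {
        _ = copyIfMissing("docker@127.0.0.1",
            "kube-proxy_v1.35.0-rc.1",
            "/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1")
    }
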
	I1222 01:33:04.578063 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:33:04.578171 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:04.578298 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.578396 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1222 01:33:04.578441 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1222 01:33:04.578533 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:04.578614 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:04.578711 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:04.578818 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:04.578938 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:33:04.579022 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:04.685252 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.685290 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1222 01:33:04.685411 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1222 01:33:04.685432 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1222 01:33:04.685507 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:04.685627 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:04.685705 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.685724 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1222 01:33:04.685827 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1222 01:33:04.685868 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	W1222 01:33:04.721125 2113859 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1222 01:33:04.721420 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:04.764232 2113859 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.764322 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.764378 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.764399 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1222 01:33:05.036404 2113859 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1222 01:33:05.036893 2113859 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:05.036972 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:05.413289 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1222 01:33:05.413338 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:05.413379 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:05.413450 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:07.584416 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (2.171016792s)
	I1222 01:33:07.584440 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1222 01:33:07.584457 2113859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:07.584508 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:07.584579 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.171103181s)
	I1222 01:33:07.584616 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:08.872326 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.287797891s)
	I1222 01:33:08.872352 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1222 01:33:08.872368 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:08.872423 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:08.872487 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.287861782s)
	I1222 01:33:08.872519 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:10.300919 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.428379011s)
	I1222 01:33:10.300966 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1222 01:33:10.301066 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:10.301154 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.428721212s)
	I1222 01:33:10.301167 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1222 01:33:10.301181 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:10.301207 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:11.471273 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.17003853s)
	I1222 01:33:11.471300 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1222 01:33:11.471326 2113859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:11.471376 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:11.471466 2113859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.170385514s)
	I1222 01:33:11.471503 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1222 01:33:11.471522 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1222 01:33:13.518323 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (2.046913435s)
	I1222 01:33:13.518351 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1222 01:33:13.518369 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:13.518418 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:15.056778 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.538332463s)
	I1222 01:33:15.056802 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1222 01:33:15.056819 2113859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:15.056877 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:15.669239 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1222 01:33:15.669279 2113859 cache_images.go:125] Successfully loaded all cached images
	I1222 01:33:15.669285 2113859 cache_images.go:94] duration metric: took 12.273177254s to LoadCachedImages
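
The image-load phase above repeats one pattern per cached image: stat the tarball under /var/lib/minikube/images, transfer it from the local cache only when the existence check fails, then podman load it into the CRI-O store. Below is a minimal Go sketch of that stat-then-transfer-then-load pattern; the plain file copy stands in for the scp hop and the paths are illustrative, so read it as a sketch of the flow in this log, not minikube's actual cache_images implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage mirrors the sequence in the log: only transfer the
// tarball when the stat existence check fails, then load it with podman.
func loadCachedImage(cacheDir, destDir, name string) error {
	src := filepath.Join(cacheDir, name)
	dst := filepath.Join(destDir, name)

	if _, err := os.Stat(dst); os.IsNotExist(err) {
		data, err := os.ReadFile(src)
		if err != nil {
			return fmt.Errorf("read cache: %w", err)
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return fmt.Errorf("transfer: %w", err)
		}
	}
	// "sudo podman load -i <tarball>", as run repeatedly above.
	out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths modeled on this run's cache layout.
	err := loadCachedImage("/tmp/cache/images/arm64/registry.k8s.io",
		"/var/lib/minikube/images", "pause_3.10.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}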
	I1222 01:33:15.669298 2113859 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:33:15.669428 2113859 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:33:15.669521 2113859 ssh_runner.go:195] Run: crio config
	I1222 01:33:15.766267 2113859 cni.go:84] Creating CNI manager for ""
	I1222 01:33:15.766317 2113859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:33:15.766335 2113859 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:33:15.766360 2113859 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491054 NodeName:no-preload-491054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:33:15.766489 2113859 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
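
The multi-document kubeadm config above is what later gets written to /var/tmp/minikube/kubeadm.yaml. As a quick sanity check one can walk each YAML document and confirm the KubeletConfiguration's cgroupDriver matches the runtime ("cgroupfs" in this run). A small sketch using gopkg.in/yaml.v3 (an assumed dependency for this illustration, not part of minikube):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

// Walk every document in the generated kubeadm.yaml and print the
// KubeletConfiguration's cgroupDriver, which should match the runtime.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path used later in this log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			return
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"]) // expect "cgroupfs" here
		}
	}
}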
	
	I1222 01:33:15.766568 2113859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:33:15.775000 2113859 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1222 01:33:15.775076 2113859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:33:15.783792 2113859 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1222 01:33:15.783912 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1222 01:33:15.784829 2113859 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet
	I1222 01:33:15.785387 2113859 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm
	I1222 01:33:15.789407 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1222 01:33:15.789452 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1222 01:33:16.870782 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:33:16.893005 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1222 01:33:16.897126 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1222 01:33:16.897163 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1222 01:33:17.155180 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1222 01:33:17.172327 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1222 01:33:17.172369 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
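
Each kubectl/kubelet/kubeadm download above carries a checksum=file:...sha256 query, i.e. the binary is verified against the published SHA-256 before being cached and transferred. A self-contained sketch of that verify-while-downloading step; fetchVerified is a hypothetical helper, while the dl.k8s.io URL is taken from this run:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified streams url into path while hashing, then compares the
// digest with the published <url>.sha256, matching the checksum=file:
// scheme in the download lines above.
func fetchVerified(url, path string) error {
	sumBody, err := httpGetString(url + ".sha256")
	if err != nil {
		return err
	}
	fields := strings.Fields(sumBody)
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func httpGetString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	url := "https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl"
	if err := fetchVerified(url, "kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}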
	I1222 01:33:17.633126 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:33:17.641170 2113859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:33:17.657290 2113859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:33:17.672129 2113859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:33:17.686042 2113859 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:33:17.690564 2113859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
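
The bash one-liner above keeps /etc/hosts idempotent across restarts: strip any existing control-plane.minikube.internal entry, then append the current mapping. The same logic as a small Go sketch; updateHosts is a hypothetical helper, the values come from this run, and writing /etc/hosts needs root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any existing line mapping name, then appends the
// current ip<TAB>name entry, so repeated starts never duplicate it.
func updateHosts(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}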
	I1222 01:33:17.701363 2113859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:17.830376 2113859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:33:17.851150 2113859 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054 for IP: 192.168.76.2
	I1222 01:33:17.851171 2113859 certs.go:195] generating shared ca certs ...
	I1222 01:33:17.851188 2113859 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.851322 2113859 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:33:17.851370 2113859 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:33:17.851382 2113859 certs.go:257] generating profile certs ...
	I1222 01:33:17.851434 2113859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key
	I1222 01:33:17.851451 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt with IP's: []
	I1222 01:33:17.979141 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt ...
	I1222 01:33:17.979169 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: {Name:mkcc53105d7ba3a27d1d6f12bb0c9eed295215c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.979387 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key ...
	I1222 01:33:17.979402 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key: {Name:mke07de840627684a57c31550e8378dc9e95cd66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.979495 2113859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a
	I1222 01:33:17.979512 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:33:18.564226 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a ...
	I1222 01:33:18.564256 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a: {Name:mkd8e629f62eab359f7658bb5f9017eca71fb865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.564456 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a ...
	I1222 01:33:18.564471 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a: {Name:mke0f2bc960eb03fd7ec564dbd0be938a1e53d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.564555 2113859 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt
	I1222 01:33:18.564630 2113859 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key
	I1222 01:33:18.564692 2113859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key
	I1222 01:33:18.564710 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt with IP's: []
	I1222 01:33:18.841279 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt ...
	I1222 01:33:18.841307 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt: {Name:mkccabd6e6a4f8a64f07dedbf98fb912005ff0f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.841487 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key ...
	I1222 01:33:18.841501 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key: {Name:mk865b3aaa0336b2f3bf58a8de85bdb9588e55ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
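
The certs phase above signs three profile certs against the shared CAs; the apiserver cert embeds the IP SANs logged at 01:33:17.979512 (10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2). A minimal crypto/x509 sketch of signing a serving cert with those SANs under a throwaway CA; key sizes and lifetimes are illustrative, not minikube's, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs as the apiserver cert in this run.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}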
	I1222 01:33:18.841685 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:33:18.841733 2113859 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:33:18.841746 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:33:18.841773 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:33:18.841800 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:33:18.841864 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:33:18.841917 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:18.842483 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:33:18.860286 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:33:18.878323 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:33:18.900560 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:33:18.920760 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:33:18.940517 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:33:18.962227 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:33:18.980595 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:33:19.000729 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:33:19.021971 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:33:19.040728 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:33:19.058835 2113859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:33:19.072001 2113859 ssh_runner.go:195] Run: openssl version
	I1222 01:33:19.079871 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.094194 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:33:19.108368 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.112634 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.112704 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.156081 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:33:19.163807 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:33:19.171280 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.178589 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:33:19.186351 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.190135 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.190200 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.231471 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:33:19.239260 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:33:19.246949 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.254373 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:33:19.262303 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.266149 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.266256 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.308645 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:33:19.316489 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
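
The ln -fs steps above install each PEM into the system trust store under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch that reproduces the step by shelling out to openssl x509 -hash; linkByHash is a hypothetical helper and writing /etc/ssl/certs needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM and points
// /etc/ssl/certs/<hash>.0 at it, as the ln -fs steps above do.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}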
	I1222 01:33:19.324189 2113859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:33:19.327847 2113859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:33:19.327926 2113859 kubeadm.go:401] StartCluster: {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:33:19.328012 2113859 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:33:19.328077 2113859 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:33:19.367450 2113859 cri.go:96] found id: ""
	I1222 01:33:19.367568 2113859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:33:19.379376 2113859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:33:19.390640 2113859 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:33:19.390726 2113859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:33:19.407000 2113859 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:33:19.407021 2113859 kubeadm.go:158] found existing configuration files:
	
	I1222 01:33:19.407093 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:33:19.414853 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:33:19.414947 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:33:19.423608 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:33:19.431372 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:33:19.431496 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:33:19.439057 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:33:19.447157 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:33:19.447221 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:33:19.455751 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:33:19.463261 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:33:19.463374 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
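
The grep/rm loop above removes any kubeconfig that does not already point at https://control-plane.minikube.internal:8443, so kubeadm regenerates all four on init. The same logic in miniature; cleanStaleConf is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConf keeps a kubeconfig only when it already points at the
// expected endpoint; otherwise it is removed so kubeadm regenerates it.
func cleanStaleConf(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		os.Remove(path) // matches the sudo rm -f above; tolerates missing files
		fmt.Println("removed (stale or absent):", path)
		return
	}
	fmt.Println("kept:", path)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		cleanStaleConf("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
	}
}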
	I1222 01:33:19.470637 2113859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:33:19.512148 2113859 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:33:19.512520 2113859 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:33:19.593167 2113859 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:33:19.593327 2113859 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:33:19.593393 2113859 kubeadm.go:319] OS: Linux
	I1222 01:33:19.593474 2113859 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:33:19.593555 2113859 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:33:19.593644 2113859 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:33:19.593721 2113859 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:33:19.593798 2113859 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:33:19.593869 2113859 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:33:19.593949 2113859 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:33:19.594020 2113859 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:33:19.594096 2113859 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:33:19.662779 2113859 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:33:19.662894 2113859 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:33:19.662988 2113859 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:33:19.682584 2113859 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:33:19.688410 2113859 out.go:252]   - Generating certificates and keys ...
	I1222 01:33:19.688524 2113859 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:33:19.688595 2113859 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:33:20.151895 2113859 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:33:20.401703 2113859 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:33:21.005040 2113859 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:33:21.140685 2113859 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:33:21.824673 2113859 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:33:21.825252 2113859 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:33:22.020671 2113859 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:33:22.020933 2113859 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:33:22.474306 2113859 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:33:22.884092 2113859 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:33:23.034492 2113859 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:33:23.034774 2113859 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:33:23.075352 2113859 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:33:23.135102 2113859 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:33:23.251153 2113859 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:33:23.986932 2113859 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:33:24.273870 2113859 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:33:24.274658 2113859 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:33:24.277328 2113859 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:33:24.281021 2113859 out.go:252]   - Booting up control plane ...
	I1222 01:33:24.281119 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:33:24.281198 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:33:24.281264 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:33:24.298176 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:33:24.298295 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:33:24.306342 2113859 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:33:24.306969 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:33:24.307038 2113859 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:33:24.444902 2113859 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:33:24.445025 2113859 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:37:24.445159 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000649464s
	I1222 01:37:24.445188 2113859 kubeadm.go:319] 
	I1222 01:37:24.445245 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:37:24.445278 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:37:24.445510 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:37:24.445521 2113859 kubeadm.go:319] 
	I1222 01:37:24.445625 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:37:24.445657 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:37:24.445688 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:37:24.445692 2113859 kubeadm.go:319] 
	I1222 01:37:24.449085 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:37:24.449561 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:37:24.449681 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:37:24.449965 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:37:24.449971 2113859 kubeadm.go:319] 
	I1222 01:37:24.450044 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:37:24.450217 2113859 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000649464s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
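
The failure above is the kubelet never answering http://127.0.0.1:10248/healthz inside kubeadm's 4m0s window: connection refused means the process likely never bound the port at all, and the cgroups v1 deprecation warning (FailCgroupV1 for kubelet v1.35+) is a plausible cause on this 5.15 kernel. A small probe loop reproducing kubeadm's kubelet-check, useful when triaging this by hand:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Poll the kubelet healthz endpoint the way kubeadm's kubelet-check does,
// for up to the same 4m0s window seen in the log.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthy")
				return
			}
		}
		// "connect: connection refused", as in this run, means the kubelet
		// process never bound the port at all.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet not healthy before deadline; check journalctl -xeu kubelet")
}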
	
	I1222 01:37:24.450303 2113859 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:37:24.890509 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:37:24.915976 2113859 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:37:24.916036 2113859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:37:24.926777 2113859 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:37:24.926795 2113859 kubeadm.go:158] found existing configuration files:
	
	I1222 01:37:24.926846 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:37:24.935332 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:37:24.935391 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:37:24.943361 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:37:24.952220 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:37:24.952286 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:37:24.960609 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:37:24.969890 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:37:24.969948 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:37:24.978647 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:37:24.987603 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:37:24.987662 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
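	The grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm init is retried. A sketch of the equivalent shell loop (an illustration of what the log shows, not minikube's actual implementation):

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	      || sudo rm -f /etc/kubernetes/$f
	  done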
	I1222 01:37:24.999818 2113859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:37:25.076514 2113859 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:37:25.087372 2113859 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:37:25.206650 2113859 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:37:25.206722 2113859 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:37:25.206758 2113859 kubeadm.go:319] OS: Linux
	I1222 01:37:25.206804 2113859 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:37:25.206853 2113859 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:37:25.206900 2113859 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:37:25.206949 2113859 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:37:25.206997 2113859 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:37:25.207051 2113859 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:37:25.207097 2113859 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:37:25.207145 2113859 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:37:25.207191 2113859 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:37:25.290369 2113859 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:37:25.290485 2113859 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:37:25.290577 2113859 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:37:25.313723 2113859 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:37:25.319101 2113859 out.go:252]   - Generating certificates and keys ...
	I1222 01:37:25.319194 2113859 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:37:25.319264 2113859 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:37:25.319343 2113859 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:37:25.319409 2113859 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:37:25.319483 2113859 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:37:25.319543 2113859 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:37:25.319610 2113859 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:37:25.319675 2113859 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:37:25.319755 2113859 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:37:25.319830 2113859 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:37:25.320128 2113859 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:37:25.320259 2113859 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:37:25.579598 2113859 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:37:26.290496 2113859 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:37:26.459743 2113859 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:37:26.687849 2113859 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:37:27.003327 2113859 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:37:27.003462 2113859 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:37:27.005755 2113859 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:37:27.009338 2113859 out.go:252]   - Booting up control plane ...
	I1222 01:37:27.009453 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:37:27.009975 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:37:27.011908 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:37:27.027550 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:37:27.027957 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:37:27.035510 2113859 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:37:27.035816 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:37:27.035861 2113859 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:37:27.169794 2113859 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:37:27.169917 2113859 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:41:27.168234 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000189279s
	I1222 01:41:27.168261 2113859 kubeadm.go:319] 
	I1222 01:41:27.168319 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:41:27.168358 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:41:27.168485 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:41:27.168497 2113859 kubeadm.go:319] 
	I1222 01:41:27.168608 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:41:27.168644 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:41:27.168684 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:41:27.168692 2113859 kubeadm.go:319] 
	I1222 01:41:27.172924 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:41:27.173370 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:41:27.173496 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.173763 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:41:27.173774 2113859 kubeadm.go:319] 
	I1222 01:41:27.173843 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:41:27.173900 2113859 kubeadm.go:403] duration metric: took 8m7.846004041s to StartCluster
	I1222 01:41:27.173948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:27.174010 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:27.198460 2113859 cri.go:96] found id: ""
	I1222 01:41:27.198503 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.198512 2113859 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:27.198519 2113859 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:41:27.198581 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:27.223806 2113859 cri.go:96] found id: ""
	I1222 01:41:27.223832 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.223840 2113859 logs.go:284] No container was found matching "etcd"
	I1222 01:41:27.223847 2113859 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:41:27.223904 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:27.250017 2113859 cri.go:96] found id: ""
	I1222 01:41:27.250042 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.250051 2113859 logs.go:284] No container was found matching "coredns"
	I1222 01:41:27.250068 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:27.250129 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:27.275909 2113859 cri.go:96] found id: ""
	I1222 01:41:27.275934 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.275942 2113859 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:27.275948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:27.276004 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:27.300182 2113859 cri.go:96] found id: ""
	I1222 01:41:27.300215 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.300225 2113859 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:27.300231 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:27.300293 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:27.324772 2113859 cri.go:96] found id: ""
	I1222 01:41:27.324797 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.324806 2113859 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:27.324813 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:27.324868 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:27.353970 2113859 cri.go:96] found id: ""
	I1222 01:41:27.354043 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.354058 2113859 logs.go:284] No container was found matching "kindnet"
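	Each "listing CRI containers" pass above is one crictl query per control-plane component, and every query returns empty because the kubelet never started any pods. The equivalent manual check, run inside the node container, would be roughly:

	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl --timeout=10s ps -a --quiet --name=$name
	  done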
	I1222 01:41:27.354068 2113859 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:27.354081 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:27.422083 2113859 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:27.422119 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:27.438918 2113859 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:27.438950 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:27.507814 2113859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:27.507835 2113859 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:41:27.507846 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:41:27.550141 2113859 logs.go:123] Gathering logs for container status ...
	I1222 01:41:27.550178 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:27.578268 2113859 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
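	The second SystemVerification warning above describes a kubelet-side opt-out for cgroup v1. A hypothetical sketch of applying it, assuming the FailCgroupV1 option named in the warning corresponds to a failCgroupV1 field in the KubeletConfiguration file kubeadm writes (verify against the linked KEP and your kubelet version before relying on this):

	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml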
	W1222 01:41:27.578345 2113859 out.go:285] * 
	W1222 01:41:27.578396 2113859 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:41:27.578413 2113859 out.go:285] * 
	W1222 01:41:27.580529 2113859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
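	The advice in the box can be applied to this profile with the test's own binary:

	  out/minikube-linux-arm64 -p no-preload-491054 logs --file=logs.txt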
	I1222 01:41:27.585764 2113859 out.go:203] 
	W1222 01:41:27.589556 2113859 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:41:27.589611 2113859 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:41:27.589635 2113859 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
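	Spelled out, the suggested retry is the test's exact start invocation with the extra kubelet flag appended:

	  out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true \
	    --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 \
	    --extra-config=kubelet.cgroup-driver=systemd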
	I1222 01:41:27.592761 2113859 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2114161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:32:55.338008768Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0028d366ff0985d5277d2d2f90b7ddf8e366e82e83e7670b234adfbab8a808ab",
	            "SandboxKey": "/var/run/docker/netns/0028d366ff09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38752"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38755"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38753"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38754"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:04:e9:7c:da:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "31fb2fe9d31e4ac100d348896a6811cbe2d47df53862df096768f216d2449c34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
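
The inspect output above shows each published container port bound to an ephemeral host port on 127.0.0.1 (22/tcp -> 38751, 8443/tcp -> 38754, and so on). A minimal Go sketch of reading one such mapping, using the same inspect template that cli_runner applies later in this log; the profile name is taken from the dump above, and a working Docker CLI on PATH is assumed:

    // portlookup.go - sketch only; mirrors the Go template minikube's
    // cli_runner uses to resolve the host port published for 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Walks .NetworkSettings.Ports["22/tcp"][0].HostPort in the
        // inspect JSON shown above.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
            "no-preload-491054").Output()
        if err != nil {
            panic(err)
        }
        // For the container above this prints 38751.
        fmt.Println("22/tcp published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }
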
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 6 (314.591245ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:41:28.060687 2138756 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
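
The exit status 6 above is produced by the kubeconfig check at status.go:458: the profile's cluster entry is missing from the kubeconfig, which is also why the stdout block warns that kubectl points at a stale context. A rough sketch of that lookup, assuming k8s.io/client-go is available (an illustration of the check, not minikube's actual code):

    // kubeconfigcheck.go - sketch under the assumption that client-go's
    // clientcmd loader is an acceptable way to read the kubeconfig.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path copied from the error message above.
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22179-1874095/kubeconfig")
        if err != nil {
            fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
            os.Exit(1)
        }
        // minikube status surfaces a missing cluster entry as exit status 6
        // (the "kubeconfig endpoint" error in the stderr block above).
        if _, ok := cfg.Clusters["no-preload-491054"]; !ok {
            fmt.Println(`"no-preload-491054" does not appear in the kubeconfig`)
        }
    }

Running minikube update-context against the profile would rewrite the entry, as the warning in the stdout block suggests.
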
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:34.723707 2134286 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:34.723823 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.723833 2134286 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:34.723839 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.724123 2134286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:38:34.724545 2134286 out.go:368] Setting JSON to false
	I1222 01:38:34.725446 2134286 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116464,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:38:34.725511 2134286 start.go:143] virtualization:  
	I1222 01:38:34.729663 2134286 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:34.733886 2134286 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:34.734036 2134286 notify.go:221] Checking for updates...
	I1222 01:38:34.740441 2134286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:34.743585 2134286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:38:34.746751 2134286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:38:34.749869 2134286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:34.752958 2134286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:34.756493 2134286 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:34.756622 2134286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:34.783091 2134286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:34.783353 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.839782 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.830338261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.839888 2134286 docker.go:319] overlay module found
	I1222 01:38:34.845569 2134286 out.go:179] * Using the docker driver based on user configuration
	I1222 01:38:34.848592 2134286 start.go:309] selected driver: docker
	I1222 01:38:34.848611 2134286 start.go:928] validating driver "docker" against <nil>
	I1222 01:38:34.848634 2134286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:34.849297 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.935540 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.926549754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.935703 2134286 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:38:34.935733 2134286 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:38:34.935959 2134286 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:34.938976 2134286 out.go:179] * Using Docker driver with root privileges
	I1222 01:38:34.941882 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:34.941953 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:34.941972 2134286 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:38:34.942050 2134286 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:34.945127 2134286 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:38:34.947951 2134286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:38:34.950705 2134286 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:34.953516 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:34.953561 2134286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:38:34.953574 2134286 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:34.953598 2134286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:34.953659 2134286 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:34.953669 2134286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:38:34.953790 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:34.953808 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json: {Name:mkbd82291dca772e741ca6a4965901d8562f02f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:34.972608 2134286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:34.972632 2134286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:34.972653 2134286 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:34.972687 2134286 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:34.972789 2134286 start.go:364] duration metric: took 81.072µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:38:34.972826 2134286 start.go:93] Provisioning new machine with config: &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:38:34.972897 2134286 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:38:34.976294 2134286 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:38:34.976517 2134286 start.go:159] libmachine.API.Create for "newest-cni-704217" (driver="docker")
	I1222 01:38:34.976560 2134286 client.go:173] LocalClient.Create starting
	I1222 01:38:34.976631 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:38:34.976666 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976687 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.976752 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:38:34.976774 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976789 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.977153 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:38:34.991933 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:38:34.992025 2134286 network_create.go:284] running [docker network inspect newest-cni-704217] to gather additional debugging logs...
	I1222 01:38:34.992051 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217
	W1222 01:38:35.009843 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 returned with exit code 1
	I1222 01:38:35.009879 2134286 network_create.go:287] error running [docker network inspect newest-cni-704217]: docker network inspect newest-cni-704217: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-704217 not found
	I1222 01:38:35.009894 2134286 network_create.go:289] output of [docker network inspect newest-cni-704217]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-704217 not found
	
	** /stderr **
	I1222 01:38:35.009998 2134286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:35.028311 2134286 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:38:35.028684 2134286 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:38:35.029053 2134286 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:38:35.029412 2134286 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:38:35.029864 2134286 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5ab0}
	I1222 01:38:35.029890 2134286 network_create.go:124] attempt to create docker network newest-cni-704217 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:38:35.029949 2134286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-704217 newest-cni-704217
	I1222 01:38:35.091875 2134286 network_create.go:108] docker network newest-cni-704217 192.168.85.0/24 created
	I1222 01:38:35.091908 2134286 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-704217" container
	I1222 01:38:35.091998 2134286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:38:35.109188 2134286 cli_runner.go:164] Run: docker volume create newest-cni-704217 --label name.minikube.sigs.k8s.io=newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:38:35.127123 2134286 oci.go:103] Successfully created a docker volume newest-cni-704217
	I1222 01:38:35.127220 2134286 cli_runner.go:164] Run: docker run --rm --name newest-cni-704217-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --entrypoint /usr/bin/test -v newest-cni-704217:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:38:35.650646 2134286 oci.go:107] Successfully prepared a docker volume newest-cni-704217
	I1222 01:38:35.650718 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:35.650728 2134286 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:38:35.650802 2134286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:38:39.466788 2134286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.81593333s)
	I1222 01:38:39.466823 2134286 kic.go:203] duration metric: took 3.816091086s to extract preloaded images to volume ...
	W1222 01:38:39.466970 2134286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:38:39.467078 2134286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:38:39.518928 2134286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-704217 --name newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-704217 --network newest-cni-704217 --ip 192.168.85.2 --volume newest-cni-704217:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:38:39.824489 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Running}}
	I1222 01:38:39.845273 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:39.868132 2134286 cli_runner.go:164] Run: docker exec newest-cni-704217 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:38:39.918857 2134286 oci.go:144] the created container "newest-cni-704217" has a running status.
	I1222 01:38:39.918885 2134286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa...
	I1222 01:38:40.057379 2134286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:38:40.081663 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.113212 2134286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:38:40.113247 2134286 kic_runner.go:114] Args: [docker exec --privileged newest-cni-704217 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:38:40.193617 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.220327 2134286 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:40.220433 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:40.250962 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:40.251306 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:40.251316 2134286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:40.251882 2134286 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51396->127.0.0.1:38776: read: connection reset by peer
	I1222 01:38:43.385195 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.385223 2134286 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:38:43.385289 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.406922 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.407255 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.407271 2134286 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:38:43.550918 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.550996 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.569002 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.569357 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.569382 2134286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:43.701601 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:43.701676 2134286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:38:43.701715 2134286 ubuntu.go:190] setting up certificates
	I1222 01:38:43.701726 2134286 provision.go:84] configureAuth start
	I1222 01:38:43.701795 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:43.718903 2134286 provision.go:143] copyHostCerts
	I1222 01:38:43.718967 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:38:43.718976 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:38:43.719054 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:38:43.719156 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:38:43.719162 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:38:43.719188 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:38:43.719246 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:38:43.719251 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:38:43.719274 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:38:43.719328 2134286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:38:43.977508 2134286 provision.go:177] copyRemoteCerts
	I1222 01:38:43.977583 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:43.977627 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.995727 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.093656 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:44.115417 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:44.140528 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:38:44.162225 2134286 provision.go:87] duration metric: took 460.475764ms to configureAuth
	I1222 01:38:44.162253 2134286 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:44.162450 2134286 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:44.162564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.180792 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:44.181103 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:44.181121 2134286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:38:44.469915 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:38:44.469949 2134286 machine.go:97] duration metric: took 4.249594356s to provisionDockerMachine
	I1222 01:38:44.469959 2134286 client.go:176] duration metric: took 9.493388716s to LocalClient.Create
	I1222 01:38:44.469974 2134286 start.go:167] duration metric: took 9.493457957s to libmachine.API.Create "newest-cni-704217"
	I1222 01:38:44.469987 2134286 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:38:44.469998 2134286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:44.470072 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:44.470110 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.487566 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.585312 2134286 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:44.588561 2134286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:44.588588 2134286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:44.588600 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:38:44.588655 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:38:44.588735 2134286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:38:44.588834 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:44.596094 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:44.613273 2134286 start.go:296] duration metric: took 143.27184ms for postStartSetup
	I1222 01:38:44.613679 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.630261 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:44.630541 2134286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:44.630589 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.647052 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.738630 2134286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:44.743420 2134286 start.go:128] duration metric: took 9.770509222s to createHost
	I1222 01:38:44.743497 2134286 start.go:83] releasing machines lock for "newest-cni-704217", held for 9.770692552s
	I1222 01:38:44.743588 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.760151 2134286 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:44.760209 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.760507 2134286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:44.760564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.782041 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.788656 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.973622 2134286 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:44.980619 2134286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:38:45.057102 2134286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:45.064439 2134286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:45.064557 2134286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:45.103917 2134286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:38:45.103956 2134286 start.go:496] detecting cgroup driver to use...
	I1222 01:38:45.103997 2134286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:45.104069 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:38:45.126644 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:38:45.158107 2134286 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:45.158598 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:45.189553 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:45.237377 2134286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:45.372783 2134286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:45.495736 2134286 docker.go:234] disabling docker service ...
	I1222 01:38:45.495825 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:45.520536 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:45.534342 2134286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:45.667915 2134286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:45.795197 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:45.808465 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:45.822173 2134286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:38:45.822242 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.830599 2134286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:38:45.830673 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.839239 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.847549 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.856111 2134286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:45.863959 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.872612 2134286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.886086 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.895578 2134286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:45.903269 2134286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:45.910730 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.026286 2134286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:38:46.193052 2134286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:38:46.193121 2134286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:38:46.197063 2134286 start.go:564] Will wait 60s for crictl version
	I1222 01:38:46.197156 2134286 ssh_runner.go:195] Run: which crictl
	I1222 01:38:46.200956 2134286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:46.225282 2134286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:38:46.225459 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.257224 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.288579 2134286 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:38:46.291328 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:46.307792 2134286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:46.311726 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.324397 2134286 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:38:46.327309 2134286 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:46.327454 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:46.327539 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.376288 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.376313 2134286 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:38:46.376388 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.406533 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.406556 2134286 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:46.406564 2134286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:38:46.406653 2134286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:46.406737 2134286 ssh_runner.go:195] Run: crio config
	I1222 01:38:46.477901 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:46.477922 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:46.477963 2134286 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:46.477999 2134286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:46.478139 2134286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
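The generated InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming kubeadm's stock config tooling (this step does not appear in the log), such a file can be sanity-checked before init:

    # hypothetical sanity check of the generated config (not run by minikube here)
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new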
	
	I1222 01:38:46.478220 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:46.486049 2134286 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:46.486137 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:46.493605 2134286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:38:46.506655 2134286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:46.520439 2134286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:38:46.533218 2134286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:46.537007 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.546792 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.662666 2134286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:46.679594 2134286 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:38:46.679659 2134286 certs.go:195] generating shared ca certs ...
	I1222 01:38:46.679688 2134286 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.679859 2134286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:38:46.679932 2134286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:38:46.679968 2134286 certs.go:257] generating profile certs ...
	I1222 01:38:46.680044 2134286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:38:46.680085 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt with IP's: []
	I1222 01:38:46.841660 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt ...
	I1222 01:38:46.841690 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt: {Name:mk0a220848ae73520e360330c9832985737c86ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.841917 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key ...
	I1222 01:38:46.841933 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key: {Name:mk3fb1e07728181520997d7299a7d80c9afcd3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.842027 2134286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:38:46.842046 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:38:46.952611 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 ...
	I1222 01:38:46.952645 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856: {Name:mk23d75e59a433395bcccffb2237600ddcfba975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952842 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 ...
	I1222 01:38:46.952855 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856: {Name:mke91b318f6e1a5e82c06575fddda672305e0a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952949 2134286 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt
	I1222 01:38:46.953039 2134286 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key
	I1222 01:38:46.953108 2134286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:38:46.953127 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt with IP's: []
	I1222 01:38:47.135005 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt ...
	I1222 01:38:47.135039 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt: {Name:mk0bf95a48d358b9b6753ae55278429fc5cfa2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135258 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key ...
	I1222 01:38:47.135281 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key: {Name:mk2786f86b96c0685cea4c15b093d231aba8608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135484 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:38:47.135534 2134286 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:47.135550 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:38:47.135580 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:47.135609 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:47.135636 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:47.135686 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:47.137405 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:47.165131 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:47.186245 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:47.206617 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:47.225290 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:47.243359 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:38:47.261575 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:47.279279 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:47.296796 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:38:47.314798 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:38:47.332337 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:47.350465 2134286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:47.363436 2134286 ssh_runner.go:195] Run: openssl version
	I1222 01:38:47.370127 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.377515 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:47.384963 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388630 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388716 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.430642 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:47.438907 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:38:47.446439 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.453815 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:38:47.461441 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465092 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465189 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.506277 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:47.513878 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:38:47.521311 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.528445 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:38:47.535982 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539602 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539691 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.580405 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:47.587812 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
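The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs is reachable via a symlink named after the certificate's subject hash plus a .0 suffix, which is where names like b5213941.0, 51391683.0, and 3ec20f2e.0 come from. A sketch of the general convention (the variable is illustrative, not from the log):

    # derive the <subject-hash>.0 symlink name used above (general OpenSSL convention)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"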
	I1222 01:38:47.602092 2134286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:47.606351 2134286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:38:47.606451 2134286 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:47.606588 2134286 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:47.606681 2134286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:47.642649 2134286 cri.go:96] found id: ""
	I1222 01:38:47.642790 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:47.652615 2134286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:38:47.661497 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:38:47.661607 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:38:47.669765 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:38:47.669786 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:38:47.669862 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:38:47.678062 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:38:47.678185 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:38:47.685567 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:38:47.693826 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:38:47.693987 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:38:47.702486 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.710049 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:38:47.710137 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.717426 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:38:47.724768 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:38:47.724863 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:38:47.731996 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:38:47.851611 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:38:47.852034 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:38:47.938478 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.168234 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000189279s
	I1222 01:41:27.168261 2113859 kubeadm.go:319] 
	I1222 01:41:27.168319 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:41:27.168358 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:41:27.168485 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:41:27.168497 2113859 kubeadm.go:319] 
	I1222 01:41:27.168608 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:41:27.168644 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:41:27.168684 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:41:27.168692 2113859 kubeadm.go:319] 
	I1222 01:41:27.172924 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:41:27.173370 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:41:27.173496 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.173763 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:41:27.173774 2113859 kubeadm.go:319] 
	I1222 01:41:27.173843 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:41:27.173900 2113859 kubeadm.go:403] duration metric: took 8m7.846004041s to StartCluster
	I1222 01:41:27.173948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:27.174010 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:27.198460 2113859 cri.go:96] found id: ""
	I1222 01:41:27.198503 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.198512 2113859 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:27.198519 2113859 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:41:27.198581 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:27.223806 2113859 cri.go:96] found id: ""
	I1222 01:41:27.223832 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.223840 2113859 logs.go:284] No container was found matching "etcd"
	I1222 01:41:27.223847 2113859 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:41:27.223904 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:27.250017 2113859 cri.go:96] found id: ""
	I1222 01:41:27.250042 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.250051 2113859 logs.go:284] No container was found matching "coredns"
	I1222 01:41:27.250068 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:27.250129 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:27.275909 2113859 cri.go:96] found id: ""
	I1222 01:41:27.275934 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.275942 2113859 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:27.275948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:27.276004 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:27.300182 2113859 cri.go:96] found id: ""
	I1222 01:41:27.300215 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.300225 2113859 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:27.300231 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:27.300293 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:27.324772 2113859 cri.go:96] found id: ""
	I1222 01:41:27.324797 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.324806 2113859 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:27.324813 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:27.324868 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:27.353970 2113859 cri.go:96] found id: ""
	I1222 01:41:27.354043 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.354058 2113859 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:27.354068 2113859 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:27.354081 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:27.422083 2113859 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:27.422119 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:27.438918 2113859 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:27.438950 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:27.507814 2113859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "connection refused" stderr lines shown above]
	
	** /stderr **
	I1222 01:41:27.507835 2113859 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:41:27.507846 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:41:27.550141 2113859 logs.go:123] Gathering logs for container status ...
	I1222 01:41:27.550178 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:27.578268 2113859 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:41:27.578345 2113859 out.go:285] * 
	W1222 01:41:27.578396 2113859 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical to the kubeadm init output shown above]
	
	W1222 01:41:27.578413 2113859 out.go:285] * 
	W1222 01:41:27.580529 2113859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:41:27.585764 2113859 out.go:203] 
	W1222 01:41:27.589556 2113859 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical to the kubeadm init output shown above]
	
	W1222 01:41:27.589611 2113859 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:41:27.589635 2113859 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:41:27.592761 2113859 out.go:203] 
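The suggestion above points at a kubelet/CRI-O cgroup-driver mismatch as the usual cause of this healthz timeout. A hedged triage sketch based on the commands the output itself recommends (the crio config grep is an assumption about where CRI-O reports its cgroup manager; profile name taken from the log):

    # triage steps suggested by the failure output above (illustrative)
    minikube ssh -p no-preload-491054 -- sudo systemctl status kubelet
    minikube ssh -p no-preload-491054 -- sudo journalctl -xeu kubelet
    minikube ssh -p no-preload-491054 -- sudo crio config | grep cgroup_manager
    # retry with the systemd cgroup driver, per the Suggestion line:
    minikube start -p no-preload-491054 --extra-config=kubelet.cgroup-driver=systemd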
	
	
	==> CRI-O <==
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665075722Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-rc.1 not found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665126248Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-apiserver:v1.35.0-rc.1 found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464435752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464781145Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464820692Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637248018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.63753477Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637589932Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903489965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903812647Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903864485Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.666376352Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=d373ec24-b32f-4bc7-b1ee-dbc429c84518 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.670731699Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=985533ce-73ee-492d-bec3-e3f7745742a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.672178252Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=4324ee88-2078-4e1d-ac8c-226946777419 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.673730164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=94416a1e-c2e8-4002-b2df-c67290c0deda name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.675651066Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=26b90d19-1583-4402-8028-995cec1d8c95 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.678449012Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4b69cbcb-1564-462b-975a-a87b64ce4d87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.680670114Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=7125d5bc-0162-450e-a7bb-265b2a72453d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.297315654Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=7a582268-ead4-4858-9bcb-880317cde6d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.301718861Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=758439ba-2257-42b2-b322-7eee997029e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.303295905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=6d1d7a92-5737-428e-8f96-ffefc25ceae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.305283578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ebe1fb9e-67e5-4ffb-b4bd-137d43ff137b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.306347021Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6bb0a1d2-344b-4b71-8ad1-df4251af2703 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.309548352Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3c577299-eacf-44a2-91b2-dea64d0df8f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.310598748Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c9f46b28-9cc1-401a-9feb-ff7f5b2f29b5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:28.713468    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:28.714207    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:28.715822    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:28.716502    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:28.718107    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:41:28 up 1 day,  8:23,  0 user,  load average: 0.37, 1.45, 1.98
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:41:26 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:26 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 22 01:41:26 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:26 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:26 no-preload-491054 kubelet[5510]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:26 no-preload-491054 kubelet[5510]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:26 no-preload-491054 kubelet[5510]: E1222 01:41:26.914365    5510 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:26 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:26 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:27 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 22 01:41:27 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:27 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:27 no-preload-491054 kubelet[5596]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:27 no-preload-491054 kubelet[5596]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:27 no-preload-491054 kubelet[5596]: E1222 01:41:27.688259    5596 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:27 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:27 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:28 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 22 01:41:28 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:28 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:28 no-preload-491054 kubelet[5624]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:28 no-preload-491054 kubelet[5624]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:28 no-preload-491054 kubelet[5624]: E1222 01:41:28.419804    5624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:28 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:28 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
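Note on the log dump above: the kubelet crash loop (restart counters 647-649) comes from the v1.35.0-rc.1 kubelet refusing to validate its configuration on a cgroup v1 host, which in turn explains the "connection refused" errors from `kubectl describe nodes`. A minimal triage sketch, assuming shell access to the node; the `minikube ssh` invocation and the cgroup mount point are the standard ones, not anything specific to this job:
	# Check which cgroup hierarchy the kicbase node mounts:
	# "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 hierarchy.
	out/minikube-linux-arm64 ssh -p no-preload-491054 -- stat -fc %T /sys/fs/cgroup
	# Confirm the kubelet is dying for the reason systemd reports:
	out/minikube-linux-arm64 ssh -p no-preload-491054 -- sudo journalctl -u kubelet -n 20 --no-pager
If the first command prints "tmpfs", the failure points at the Jenkins worker's kernel/boot configuration (or, if this kubelet version still supports it, at the cgroup v1 opt-out in the KubeletConfiguration) rather than at the test itself.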
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 6 (390.052204ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 01:41:29.215718 2138975 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (515.29s)
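For reference, the `--format` flag used by the post-mortem helpers takes a Go template over minikube's status struct, so individual fields can be queried directly when reproducing this failure. A usage sketch built only from commands already shown in this report:
	# Query single status fields for the profile (templates as used above):
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
	# Non-zero exits (2, 6, ...) encode degraded states and may be expected,
	# as the "(may be ok)" annotations indicate.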

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-535197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-535197 --alsologtostderr -v=1: exit status 80 (1.703353332s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-535197 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1222 01:33:33.364240 2116786 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:33:33.364439 2116786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:33:33.364469 2116786 out.go:374] Setting ErrFile to fd 2...
	I1222 01:33:33.364488 2116786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:33:33.364783 2116786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:33:33.365104 2116786 out.go:368] Setting JSON to false
	I1222 01:33:33.365154 2116786 mustload.go:66] Loading cluster: old-k8s-version-535197
	I1222 01:33:33.365642 2116786 config.go:182] Loaded profile config "old-k8s-version-535197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:33:33.366156 2116786 cli_runner.go:164] Run: docker container inspect old-k8s-version-535197 --format={{.State.Status}}
	I1222 01:33:33.384698 2116786 host.go:66] Checking if "old-k8s-version-535197" exists ...
	I1222 01:33:33.385042 2116786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:33:33.457866 2116786 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-22 01:33:33.448703268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:33:33.458504 2116786 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-535197 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1222 01:33:33.461966 2116786 out.go:179] * Pausing node old-k8s-version-535197 ... 
	I1222 01:33:33.465658 2116786 host.go:66] Checking if "old-k8s-version-535197" exists ...
	I1222 01:33:33.465989 2116786 ssh_runner.go:195] Run: systemctl --version
	I1222 01:33:33.466047 2116786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-535197
	I1222 01:33:33.483515 2116786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38746 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/old-k8s-version-535197/id_rsa Username:docker}
	I1222 01:33:33.575955 2116786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:33:33.588760 2116786 pause.go:52] kubelet running: true
	I1222 01:33:33.588840 2116786 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:33:33.836638 2116786 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:33:33.836721 2116786 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:33:33.904770 2116786 cri.go:96] found id: "2cdb0c07e974428487346498e876bb16c65cfa7e95e5303ed7f92aa135b676a5"
	I1222 01:33:33.904792 2116786 cri.go:96] found id: "b58fe534665d48104bc15d15f4c6f0ece29c6665e8419ee2a5bdd0272b03c474"
	I1222 01:33:33.904798 2116786 cri.go:96] found id: "b2b297128099a31b08568ff2228dd1cd7119453d1eba86c528fe063e19bda8c0"
	I1222 01:33:33.904801 2116786 cri.go:96] found id: "336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04"
	I1222 01:33:33.904805 2116786 cri.go:96] found id: "e20ee4063c483434ea2065bdae477712ff0bd65c9cc1905a7305884c3a1d5a61"
	I1222 01:33:33.904808 2116786 cri.go:96] found id: "71f0891781692fab9a2c32b04d877bfefe34482d11319ba2640e98531c0dc00a"
	I1222 01:33:33.904811 2116786 cri.go:96] found id: "58d08a0c5771635b028dd68e96fe171c409dd75d3491d6f68517244657c139ff"
	I1222 01:33:33.904814 2116786 cri.go:96] found id: "ee1dc9c8ebfd92a5a6c114d952ca8de817933c150946b2636677ec572aa63702"
	I1222 01:33:33.904817 2116786 cri.go:96] found id: "6458377dd96fa4bd80200d2466d86226b03550784e2c540f23e3138350d286c4"
	I1222 01:33:33.904825 2116786 cri.go:96] found id: "213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	I1222 01:33:33.904829 2116786 cri.go:96] found id: "53cf84266072177d217ef2457215faa2f0d9e68a7d7cb456e860ff12d6878b3c"
	I1222 01:33:33.904832 2116786 cri.go:96] found id: ""
	I1222 01:33:33.904891 2116786 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:33:33.916262 2116786 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:33:33Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:33:34.177617 2116786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:33:34.190781 2116786 pause.go:52] kubelet running: false
	I1222 01:33:34.190867 2116786 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:33:34.352769 2116786 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:33:34.352937 2116786 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:33:34.421774 2116786 cri.go:96] found id: "2cdb0c07e974428487346498e876bb16c65cfa7e95e5303ed7f92aa135b676a5"
	I1222 01:33:34.421799 2116786 cri.go:96] found id: "b58fe534665d48104bc15d15f4c6f0ece29c6665e8419ee2a5bdd0272b03c474"
	I1222 01:33:34.421804 2116786 cri.go:96] found id: "b2b297128099a31b08568ff2228dd1cd7119453d1eba86c528fe063e19bda8c0"
	I1222 01:33:34.421808 2116786 cri.go:96] found id: "336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04"
	I1222 01:33:34.421812 2116786 cri.go:96] found id: "e20ee4063c483434ea2065bdae477712ff0bd65c9cc1905a7305884c3a1d5a61"
	I1222 01:33:34.421831 2116786 cri.go:96] found id: "71f0891781692fab9a2c32b04d877bfefe34482d11319ba2640e98531c0dc00a"
	I1222 01:33:34.421834 2116786 cri.go:96] found id: "58d08a0c5771635b028dd68e96fe171c409dd75d3491d6f68517244657c139ff"
	I1222 01:33:34.421837 2116786 cri.go:96] found id: "ee1dc9c8ebfd92a5a6c114d952ca8de817933c150946b2636677ec572aa63702"
	I1222 01:33:34.421841 2116786 cri.go:96] found id: "6458377dd96fa4bd80200d2466d86226b03550784e2c540f23e3138350d286c4"
	I1222 01:33:34.421847 2116786 cri.go:96] found id: "213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	I1222 01:33:34.421851 2116786 cri.go:96] found id: "53cf84266072177d217ef2457215faa2f0d9e68a7d7cb456e860ff12d6878b3c"
	I1222 01:33:34.421854 2116786 cri.go:96] found id: ""
	I1222 01:33:34.421910 2116786 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:33:34.690548 2116786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:33:34.703618 2116786 pause.go:52] kubelet running: false
	I1222 01:33:34.703697 2116786 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:33:34.889881 2116786 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:33:34.889969 2116786 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:33:34.958267 2116786 cri.go:96] found id: "2cdb0c07e974428487346498e876bb16c65cfa7e95e5303ed7f92aa135b676a5"
	I1222 01:33:34.958340 2116786 cri.go:96] found id: "b58fe534665d48104bc15d15f4c6f0ece29c6665e8419ee2a5bdd0272b03c474"
	I1222 01:33:34.958357 2116786 cri.go:96] found id: "b2b297128099a31b08568ff2228dd1cd7119453d1eba86c528fe063e19bda8c0"
	I1222 01:33:34.958375 2116786 cri.go:96] found id: "336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04"
	I1222 01:33:34.958416 2116786 cri.go:96] found id: "e20ee4063c483434ea2065bdae477712ff0bd65c9cc1905a7305884c3a1d5a61"
	I1222 01:33:34.958438 2116786 cri.go:96] found id: "71f0891781692fab9a2c32b04d877bfefe34482d11319ba2640e98531c0dc00a"
	I1222 01:33:34.958454 2116786 cri.go:96] found id: "58d08a0c5771635b028dd68e96fe171c409dd75d3491d6f68517244657c139ff"
	I1222 01:33:34.958469 2116786 cri.go:96] found id: "ee1dc9c8ebfd92a5a6c114d952ca8de817933c150946b2636677ec572aa63702"
	I1222 01:33:34.958498 2116786 cri.go:96] found id: "6458377dd96fa4bd80200d2466d86226b03550784e2c540f23e3138350d286c4"
	I1222 01:33:34.958526 2116786 cri.go:96] found id: "213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	I1222 01:33:34.958542 2116786 cri.go:96] found id: "53cf84266072177d217ef2457215faa2f0d9e68a7d7cb456e860ff12d6878b3c"
	I1222 01:33:34.958557 2116786 cri.go:96] found id: ""
	I1222 01:33:34.958632 2116786 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:33:34.972686 2116786 out.go:203] 
	W1222 01:33:34.975792 2116786 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:33:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:33:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 01:33:34.975860 2116786 out.go:285] * 
	* 
	W1222 01:33:34.996448 2116786 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:33:35.008514 2116786 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-535197 --alsologtostderr -v=1 failed: exit status 80
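The GUEST_PAUSE failure above reduces to `sudo runc list -f json` reporting `open /run/runc: no such file or directory`. A hedged cross-check for that situation, assuming the standard crictl and runc CLIs inside the node (`/run/runc` is runc's documented default state root; it is spelled out here only to make the failing lookup explicit):
	# CRI-O tracks pods through the CRI, so crictl still lists them even when
	# runc's default state directory is absent; this is the same query the
	# pause path itself ran successfully above.
	sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc only knows about containers registered under its state root:
	sudo runc --root /run/runc list -f json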
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-535197
helpers_test.go:244: (dbg) docker inspect old-k8s-version-535197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1",
	        "Created": "2025-12-22T01:31:08.113695685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2110806,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:32:27.52551515Z",
	            "FinishedAt": "2025-12-22T01:32:26.682445299Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1-json.log",
	        "Name": "/old-k8s-version-535197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-535197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-535197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1",
	                "LowerDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-535197",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-535197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-535197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-535197",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-535197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e1c27da9562cbf43be7a07639b51646cc1ac89e25508c5a88707ed70c6fe984",
	            "SandboxKey": "/var/run/docker/netns/1e1c27da9562",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38746"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38750"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38748"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38749"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-535197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:2d:ad:6c:49:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5aea71c8f08ab22f4aef4ac9bf8714a6f83176256b6002b76a15250fc2c9efb0",
	                    "EndpointID": "0420f8cc7980d0c6431414d4eb594d26f27da60b45c210b4d88ac6d37d8c9ec8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-535197",
	                        "d786ec4bcd2c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
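As a side note, the mapped host ports in the inspect dump can be pulled out with the same Go template minikube itself uses in the trace above; a small usage sketch:
	# Extract the host port mapped to the node's SSH port (22/tcp);
	# given the dump above this prints 38746.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-535197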
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197: exit status 2 (330.759544ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-535197 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-535197 logs -n 25: (1.336136938s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ stop    │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p force-systemd-flag-314223 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ force-systemd-flag-314223 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p force-systemd-flag-314223                                                                                                                                                                                                                  │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ start   │ -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ cert-options-433911 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ -p cert-options-433911 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ stop    │ -p old-k8s-version-535197 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054         │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:32:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:32:54.033494 2113859 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:32:54.033719 2113859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:32:54.033749 2113859 out.go:374] Setting ErrFile to fd 2...
	I1222 01:32:54.033771 2113859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:32:54.034085 2113859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:32:54.034535 2113859 out.go:368] Setting JSON to false
	I1222 01:32:54.035756 2113859 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116123,"bootTime":1766251051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:32:54.035869 2113859 start.go:143] virtualization:  
	I1222 01:32:54.039623 2113859 out.go:179] * [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:32:54.042831 2113859 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:32:54.042915 2113859 notify.go:221] Checking for updates...
	I1222 01:32:54.049207 2113859 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:32:54.052271 2113859 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:32:54.055189 2113859 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:32:54.058044 2113859 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:32:54.060848 2113859 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:32:54.064390 2113859 config.go:182] Loaded profile config "old-k8s-version-535197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:32:54.064526 2113859 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:32:54.095516 2113859 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:32:54.095679 2113859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:32:54.165511 2113859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:32:54.155633193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:32:54.165626 2113859 docker.go:319] overlay module found
	I1222 01:32:54.168797 2113859 out.go:179] * Using the docker driver based on user configuration
	I1222 01:32:54.171717 2113859 start.go:309] selected driver: docker
	I1222 01:32:54.171740 2113859 start.go:928] validating driver "docker" against <nil>
	I1222 01:32:54.171755 2113859 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:32:54.172503 2113859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:32:54.241624 2113859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:32:54.231791653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:32:54.241776 2113859 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:32:54.242087 2113859 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:32:54.245038 2113859 out.go:179] * Using Docker driver with root privileges
	I1222 01:32:54.247987 2113859 cni.go:84] Creating CNI manager for ""
	I1222 01:32:54.248055 2113859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:32:54.248070 2113859 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:32:54.248154 2113859 start.go:353] cluster config:
	{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:32:54.251532 2113859 out.go:179] * Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	I1222 01:32:54.254388 2113859 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:32:54.257314 2113859 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:32:54.260261 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:32:54.260351 2113859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:32:54.260541 2113859 cache.go:107] acquiring lock: {Name:mkce8c58631c5f0d23bf68d20d333345f25cc3e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.260625 2113859 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:32:54.260643 2113859 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.938µs
	I1222 01:32:54.260658 2113859 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:32:54.260673 2113859 cache.go:107] acquiring lock: {Name:mk4201be15d3fc5460d5a3841ee13ef2c95ea347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.260776 2113859 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:32:54.260964 2113859 cache.go:107] acquiring lock: {Name:mkf2e3bf6867cb15e485fef5b9e233e7559c99a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261043 2113859 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:32:54.261216 2113859 cache.go:107] acquiring lock: {Name:mkb8446b3e0ab7883b46b478905fd798f55f01e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261418 2113859 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:32:54.261616 2113859 cache.go:107] acquiring lock: {Name:mk77482801a8c57f7731abc5d9887216f12ba8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261720 2113859 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:32:54.261881 2113859 cache.go:107] acquiring lock: {Name:mk692f79fde95c3755a4408e63ab002578541c74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261943 2113859 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:32:54.261958 2113859 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 81.179µs
	I1222 01:32:54.261966 2113859 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
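
The two cache hits above (storage-provisioner and pause) complete in microseconds because the image tarball is already on disk under .minikube/cache. A minimal sketch of that stat-then-skip check, with illustrative names rather than minikube's real cache API:

package main

import (
	"fmt"
	"os"
	"time"
)

// ensureCached mirrors the cache.go:115/96/80 sequence above: if the image
// tarball already exists, skip the download and report only the stat time.
// Hypothetical helper for illustration, not minikube's actual function.
func ensureCached(image, tarPath string) error {
	start := time.Now()
	if _, err := os.Stat(tarPath); err == nil {
		fmt.Printf("cache image %q -> %q took %s\n", image, tarPath, time.Since(start))
		return nil // cache hit: "save to tar file ... succeeded"
	}
	return fmt.Errorf("cache miss: %s would be pulled and saved to %s", image, tarPath)
}

func main() {
	_ = ensureCached("gcr.io/k8s-minikube/storage-provisioner:v5", "/tmp/storage-provisioner_v5")
}
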
	I1222 01:32:54.261984 2113859 cache.go:107] acquiring lock: {Name:mkd233ea519aa3a2daed97ba862dcfb34d3be39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.262064 2113859 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:32:54.262213 2113859 cache.go:107] acquiring lock: {Name:mk309b91ac5e9b8dc5721727680b39f391c5db3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.262321 2113859 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:32:54.263575 2113859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:32:54.263641 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json: {Name:mk7f1ee0d07d56b850c7f1ba7b969a78d9e732b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:32:54.264812 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:32:54.266342 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:32:54.266586 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:32:54.267191 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:32:54.267585 2113859 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:32:54.267880 2113859 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:32:54.287070 2113859 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:32:54.287091 2113859 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:32:54.287105 2113859 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:32:54.287135 2113859 start.go:360] acquireMachinesLock for no-preload-491054: {Name:mk963c2c890d85a550b13090cd7467d59e3caf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.287228 2113859 start.go:364] duration metric: took 78.931µs to acquireMachinesLock for "no-preload-491054"
	I1222 01:32:54.287257 2113859 start.go:93] Provisioning new machine with config: &{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:32:54.287327 2113859 start.go:125] createHost starting for "" (driver="docker")
	W1222 01:32:53.457546 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:32:55.458888 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:32:54.290930 2113859 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:32:54.291159 2113859 start.go:159] libmachine.API.Create for "no-preload-491054" (driver="docker")
	I1222 01:32:54.291195 2113859 client.go:173] LocalClient.Create starting
	I1222 01:32:54.291269 2113859 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:32:54.291299 2113859 main.go:144] libmachine: Decoding PEM data...
	I1222 01:32:54.291313 2113859 main.go:144] libmachine: Parsing certificate...
	I1222 01:32:54.291370 2113859 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:32:54.291392 2113859 main.go:144] libmachine: Decoding PEM data...
	I1222 01:32:54.291403 2113859 main.go:144] libmachine: Parsing certificate...
	I1222 01:32:54.291849 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:32:54.316750 2113859 cli_runner.go:211] docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:32:54.316834 2113859 network_create.go:284] running [docker network inspect no-preload-491054] to gather additional debugging logs...
	I1222 01:32:54.316855 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054
	W1222 01:32:54.335649 2113859 cli_runner.go:211] docker network inspect no-preload-491054 returned with exit code 1
	I1222 01:32:54.335682 2113859 network_create.go:287] error running [docker network inspect no-preload-491054]: docker network inspect no-preload-491054: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-491054 not found
	I1222 01:32:54.335696 2113859 network_create.go:289] output of [docker network inspect no-preload-491054]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-491054 not found
	
	** /stderr **
	I1222 01:32:54.335794 2113859 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:32:54.356466 2113859 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:32:54.356865 2113859 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:32:54.357252 2113859 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:32:54.357789 2113859 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a487f0}
	I1222 01:32:54.357815 2113859 network_create.go:124] attempt to create docker network no-preload-491054 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:32:54.357887 2113859 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-491054 no-preload-491054
	I1222 01:32:54.441852 2113859 network_create.go:108] docker network no-preload-491054 192.168.76.0/24 created
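
The network.go lines above show how the free /24 is found: candidates start at 192.168.49.0/24 and step by 9 (.58, .67, .76) until no host bridge interface (br-*) already owns the range, which is why this cluster lands on 192.168.76.0/24 and the container later gets 192.168.76.2. A minimal, self-contained sketch of that probe; the helper names are hypothetical, not minikube's API:

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any host interface address falls inside cidr,
// approximating the "skipping subnet ... that is taken" checks above.
func subnetTaken(cidr string) bool {
	_, ipnet, _ := net.ParseCIDR(cidr) // fixed format, error ignored in this sketch
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true // an existing bridge (e.g. br-...) owns this range
		}
	}
	return false
}

func main() {
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !subnetTaken(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}
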
	I1222 01:32:54.441881 2113859 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-491054" container
	I1222 01:32:54.441967 2113859 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:32:54.458632 2113859 cli_runner.go:164] Run: docker volume create no-preload-491054 --label name.minikube.sigs.k8s.io=no-preload-491054 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:32:54.478073 2113859 oci.go:103] Successfully created a docker volume no-preload-491054
	I1222 01:32:54.478163 2113859 cli_runner.go:164] Run: docker run --rm --name no-preload-491054-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491054 --entrypoint /usr/bin/test -v no-preload-491054:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:32:54.618166 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:32:54.624668 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:32:54.627006 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:32:54.628218 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:32:54.636973 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:32:54.648281 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:32:55.019288 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:32:55.019318 2113859 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 757.706011ms
	I1222 01:32:55.019334 2113859 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:32:55.181451 2113859 oci.go:107] Successfully prepared a docker volume no-preload-491054
	I1222 01:32:55.181497 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1222 01:32:55.181621 2113859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:32:55.181770 2113859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:32:55.246556 2113859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-491054 --name no-preload-491054 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491054 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-491054 --network no-preload-491054 --ip 192.168.76.2 --volume no-preload-491054:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:32:55.548149 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:32:55.548173 2113859 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.286961516s
	I1222 01:32:55.548187 2113859 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:32:55.637916 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:32:55.637949 2113859 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.376987849s
	I1222 01:32:55.637962 2113859 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:32:55.644590 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:32:55.644659 2113859 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.382447056s
	I1222 01:32:55.644686 2113859 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:32:55.645277 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:32:55.645297 2113859 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 1.383313901s
	I1222 01:32:55.645307 2113859 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:32:55.676721 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Running}}
	I1222 01:32:55.706143 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:55.728682 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:32:55.732697 2113859 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.47201229s
	I1222 01:32:55.733125 2113859 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:32:55.733195 2113859 cache.go:87] Successfully saved all images to host disk.
	I1222 01:32:55.742044 2113859 cli_runner.go:164] Run: docker exec no-preload-491054 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:32:55.793131 2113859 oci.go:144] the created container "no-preload-491054" has a running status.
	I1222 01:32:55.793157 2113859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa...
	I1222 01:32:56.150782 2113859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:32:56.176874 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:56.199648 2113859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:32:56.199777 2113859 kic_runner.go:114] Args: [docker exec --privileged no-preload-491054 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:32:56.287457 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:56.317221 2113859 machine.go:94] provisionDockerMachine start ...
	I1222 01:32:56.317311 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:56.348674 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:56.349032 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:56.349043 2113859 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:32:56.349725 2113859 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56696->127.0.0.1:38751: read: connection reset by peer
	W1222 01:32:57.461737 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:32:59.954969 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:01.958386 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:32:59.497388 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
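
The SSH handshake at 01:32:56 fails with "connection reset by peer" and succeeds three seconds later: the endpoint is simply re-dialed until sshd inside the freshly started container accepts connections. A minimal sketch of such a dial-with-retry loop, under the assumption of plain TCP polling; the helper is hypothetical, not minikube's sshutil:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps dialing addr until the listener accepts or the
// attempt budget runs out, mirroring the failed-then-successful handshake above.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			return c, nil // sshd is accepting connections now
		}
		time.Sleep(time.Second) // the container's sshd may still be starting
	}
	return nil, fmt.Errorf("ssh endpoint %s not ready: %w", addr, err)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:38751", 10)
	if err == nil {
		conn.Close()
	}
	fmt.Println(err)
}
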
	I1222 01:32:59.497478 2113859 ubuntu.go:182] provisioning hostname "no-preload-491054"
	I1222 01:32:59.497575 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:59.521272 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:59.521594 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:59.521607 2113859 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-491054 && echo "no-preload-491054" | sudo tee /etc/hostname
	I1222 01:32:59.673216 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:32:59.673380 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:59.713978 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:59.714276 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:59.714295 2113859 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491054/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:32:59.877634 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:32:59.877663 2113859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:32:59.877705 2113859 ubuntu.go:190] setting up certificates
	I1222 01:32:59.877730 2113859 provision.go:84] configureAuth start
	I1222 01:32:59.877825 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:32:59.915162 2113859 provision.go:143] copyHostCerts
	I1222 01:32:59.915223 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:32:59.915233 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:32:59.915300 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:32:59.915379 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:32:59.915385 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:32:59.915410 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:32:59.915457 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:32:59.915462 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:32:59.915485 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:32:59.915527 2113859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.no-preload-491054 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054]
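
The server certificate generated above must cover every name and address the machine will be reached by, which is what the san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054] list encodes. A minimal sketch of producing a certificate with those SANs via crypto/x509; for brevity this one is self-signed, whereas minikube signs with ca.pem/ca-key.pem as logged:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-491054"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-491054"},
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for this sketch: template doubles as the issuer.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
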
	I1222 01:33:00.019297 2113859 provision.go:177] copyRemoteCerts
	I1222 01:33:00.019469 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:33:00.019545 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.043873 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:00.239960 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:33:00.315095 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:33:00.352718 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:33:00.383851 2113859 provision.go:87] duration metric: took 506.093893ms to configureAuth
	I1222 01:33:00.383879 2113859 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:33:00.384086 2113859 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:33:00.384192 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.409890 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:00.410227 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:33:00.410243 2113859 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:33:00.748942 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:33:00.748964 2113859 machine.go:97] duration metric: took 4.431724259s to provisionDockerMachine
	I1222 01:33:00.748975 2113859 client.go:176] duration metric: took 6.457773322s to LocalClient.Create
	I1222 01:33:00.748997 2113859 start.go:167] duration metric: took 6.457839838s to libmachine.API.Create "no-preload-491054"
	I1222 01:33:00.749006 2113859 start.go:293] postStartSetup for "no-preload-491054" (driver="docker")
	I1222 01:33:00.749021 2113859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:33:00.749096 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:33:00.749155 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.768562 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:00.873634 2113859 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:33:00.877828 2113859 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:33:00.877854 2113859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:33:00.877865 2113859 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:33:00.877922 2113859 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:33:00.878011 2113859 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:33:00.878120 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:33:00.885734 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:00.904977 2113859 start.go:296] duration metric: took 155.94443ms for postStartSetup
	I1222 01:33:00.905414 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:33:00.926393 2113859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:33:00.926680 2113859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:33:00.926736 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.943814 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.043968 2113859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:33:01.049606 2113859 start.go:128] duration metric: took 6.762264752s to createHost
	I1222 01:33:01.049630 2113859 start.go:83] releasing machines lock for "no-preload-491054", held for 6.762389851s
	I1222 01:33:01.049702 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:33:01.074676 2113859 ssh_runner.go:195] Run: cat /version.json
	I1222 01:33:01.074732 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:01.074954 2113859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:33:01.075004 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:01.118316 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.119194 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.358638 2113859 ssh_runner.go:195] Run: systemctl --version
	I1222 01:33:01.368096 2113859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:33:01.437466 2113859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:33:01.443994 2113859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:33:01.444068 2113859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:33:01.492466 2113859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:33:01.492540 2113859 start.go:496] detecting cgroup driver to use...
	I1222 01:33:01.492587 2113859 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:33:01.492677 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:33:01.520160 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:33:01.553116 2113859 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:33:01.553278 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:33:01.578847 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:33:01.600093 2113859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:33:01.775882 2113859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:33:01.964970 2113859 docker.go:234] disabling docker service ...
	I1222 01:33:01.965089 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:33:01.996139 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:33:02.014677 2113859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:33:02.197851 2113859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:33:02.368250 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:33:02.384327 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:33:02.403515 2113859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:33:02.403650 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.415198 2113859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:33:02.415321 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.425372 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.434744 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.444077 2113859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:33:02.452445 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.461779 2113859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.476543 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.486290 2113859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:33:02.494924 2113859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:33:02.503372 2113859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:02.670149 2113859 ssh_runner.go:195] Run: sudo systemctl restart crio
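
The sed runs above rewrite two keys in CRI-O's drop-in config before the daemon restart: the pause image is pinned to registry.k8s.io/pause:3.10.1 and the cgroup manager is switched to "cgroupfs" to match the driver detected on the host. A minimal Go equivalent of those line rewrites, for illustration only; the real edits run as sh -c sed commands over SSH:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "systemd"`
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Println(conf)
}
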
	I1222 01:33:03.183820 2113859 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:33:03.183944 2113859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:33:03.191011 2113859 start.go:564] Will wait 60s for crictl version
	I1222 01:33:03.191125 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.195180 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:33:03.239592 2113859 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
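
Both "Will wait 60s" steps above are bounded polls: first for the crio.sock path to appear after the restart, then for crictl to answer. A minimal sketch of such a wait, using a hypothetical helper name:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or the deadline passes,
// approximating the "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}
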
	I1222 01:33:03.239750 2113859 ssh_runner.go:195] Run: crio --version
	I1222 01:33:03.276484 2113859 ssh_runner.go:195] Run: crio --version
	I1222 01:33:03.325695 2113859 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:33:03.326970 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:33:03.347240 2113859 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:33:03.351832 2113859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:33:03.363978 2113859 kubeadm.go:884] updating cluster {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:33:03.364135 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:33:03.364193 2113859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:33:03.396066 2113859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:33:03.396096 2113859 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1222 01:33:03.396162 2113859 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:03.396375 2113859 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.396475 2113859 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.396559 2113859 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.396777 2113859 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.396934 2113859 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.397121 2113859 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.397230 2113859 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.400639 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.400925 2113859 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:03.401252 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.401482 2113859 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.401640 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.401776 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.401933 2113859 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.402175 2113859 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.651035 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1222 01:33:03.670795 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.672155 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.708386 2113859 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1222 01:33:03.708477 2113859 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.708554 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.737217 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.762779 2113859 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1222 01:33:03.762819 2113859 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.762930 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.763277 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.779160 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.853247 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.908757 2113859 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1222 01:33:03.908926 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:03.909017 2113859 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1222 01:33:03.909052 2113859 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.909090 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.908801 2113859 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.909157 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.949191 2113859 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1222 01:33:03.949417 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.949484 2113859 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.949532 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.979058 2113859 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1222 01:33:03.979103 2113859 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.979172 2113859 ssh_runner.go:195] Run: which crictl
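
Each "needs transfer" line above means the runtime reported no image whose ID matches the digest expected for that tag, so the cached tarball must be loaded from the host. The decision itself is just an ID comparison; a minimal sketch with an illustrative helper, not cache_images.go's actual API:

package main

import "fmt"

// needsTransfer mirrors the cache_images.go:118 checks above: an image must be
// loaded from the host cache if the runtime has no image, or the wrong image,
// for the expected ID.
func needsTransfer(runtimeID, expectedID string) bool {
	return runtimeID == "" || runtimeID != expectedID
}

func main() {
	fmt.Println(needsTransfer("", "d7b100cd9a77"))       // true: tag absent in runtime
	fmt.Println(needsTransfer("abc123", "d7b100cd9a77")) // true: different build of the tag
}
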
	W1222 01:33:04.457206 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:06.967402 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:04.065704 2113859 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1222 01:33:04.065852 2113859 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.065919 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.066007 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:04.066112 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:04.066183 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.066412 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.066289 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:04.066318 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.232150 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.232281 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.232367 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.232456 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:04.232537 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.232627 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.232706 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:04.432423 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:33:04.432541 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.432706 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.432778 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1222 01:33:04.432869 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.432971 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.433075 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.433133 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.433231 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:04.577867 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.577952 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1222 01:33:04.578063 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:33:04.578171 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:04.578298 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.578396 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1222 01:33:04.578441 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1222 01:33:04.578533 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:04.578614 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:04.578711 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:04.578818 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:04.578938 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:33:04.579022 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:04.685252 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.685290 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1222 01:33:04.685411 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1222 01:33:04.685432 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1222 01:33:04.685507 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:04.685627 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:04.685705 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.685724 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1222 01:33:04.685827 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1222 01:33:04.685868 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
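
The transfers above all follow one pattern: stat the target path on the node, and scp the archive from the local cache only when stat exits non-zero. A minimal sketch of that step, assuming an ssh alias `node` for the minikube machine (the alias is illustrative, not from this run):

    # Copy a cached image archive only if it is not already on the node.
    SRC=$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
    DST=/var/lib/minikube/images/pause_3.10.1
    if ! ssh node "stat -c '%s %y' $DST" >/dev/null 2>&1; then
      scp "$SRC" node:"$DST"
    fi
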
	W1222 01:33:04.721125 2113859 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1222 01:33:04.721420 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:04.764232 2113859 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.764322 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.764378 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.764399 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1222 01:33:05.036404 2113859 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1222 01:33:05.036893 2113859 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:05.036972 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:05.413289 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1222 01:33:05.413338 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:05.413379 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:05.413450 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:07.584416 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (2.171016792s)
	I1222 01:33:07.584440 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1222 01:33:07.584457 2113859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:07.584508 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:07.584579 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.171103181s)
	I1222 01:33:07.584616 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:08.872326 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.287797891s)
	I1222 01:33:08.872352 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1222 01:33:08.872368 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:08.872423 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:08.872487 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.287861782s)
	I1222 01:33:08.872519 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1222 01:33:09.454517 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:11.454710 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:10.300919 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.428379011s)
	I1222 01:33:10.300966 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1222 01:33:10.301066 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:10.301154 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.428721212s)
	I1222 01:33:10.301167 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1222 01:33:10.301181 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:10.301207 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:11.471273 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.17003853s)
	I1222 01:33:11.471300 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1222 01:33:11.471326 2113859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:11.471376 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:11.471466 2113859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.170385514s)
	I1222 01:33:11.471503 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1222 01:33:11.471522 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1222 01:33:13.518323 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (2.046913435s)
	I1222 01:33:13.518351 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1222 01:33:13.518369 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:13.518418 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	W1222 01:33:13.454974 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:15.455565 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:15.056778 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.538332463s)
	I1222 01:33:15.056802 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1222 01:33:15.056819 2113859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:15.056877 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:15.669239 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1222 01:33:15.669279 2113859 cache_images.go:125] Successfully loaded all cached images
	I1222 01:33:15.669285 2113859 cache_images.go:94] duration metric: took 12.273177254s to LoadCachedImages
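
Loading is serialized: each transferred archive is handed to podman in turn, which writes it into the shared containers/storage that CRI-O reads. The equivalent loop on the node:

    # Load every staged archive into the node's image store, one at a time.
    for img in /var/lib/minikube/images/*; do
      sudo podman load -i "$img"
    done
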
	I1222 01:33:15.669298 2113859 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:33:15.669428 2113859 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
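
The drop-in above is what ultimately drives the kubelet (it is installed later in this run as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). One way to confirm systemd picked it up after the daemon-reload further down:

    # Show the effective unit, drop-ins included, and the final ExecStart.
    systemctl cat kubelet | grep -A2 '^ExecStart='
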
	I1222 01:33:15.669521 2113859 ssh_runner.go:195] Run: crio config
	I1222 01:33:15.766267 2113859 cni.go:84] Creating CNI manager for ""
	I1222 01:33:15.766317 2113859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:33:15.766335 2113859 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:33:15.766360 2113859 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491054 NodeName:no-preload-491054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:33:15.766489 2113859 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
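
The generated file stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file before it is used; a hedged sketch (the validate subcommand exists in current kubeadm, but verify it is present in the version you ship):

    # Validate the staged multi-document config before running init.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml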
	
	I1222 01:33:15.766568 2113859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:33:15.775000 2113859 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1222 01:33:15.775076 2113859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:33:15.783792 2113859 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1222 01:33:15.783912 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1222 01:33:15.784829 2113859 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet
	I1222 01:33:15.785387 2113859 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm
	I1222 01:33:15.789407 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1222 01:33:15.789452 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1222 01:33:16.870782 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:33:16.893005 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1222 01:33:16.897126 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1222 01:33:16.897163 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1222 01:33:17.155180 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1222 01:33:17.172327 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1222 01:33:17.172369 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
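
The `?checksum=file:...sha256` suffix on the download URLs above tells the downloader to verify each binary against its published sha256. The manual equivalent, given that the .sha256 files on dl.k8s.io contain only the hash:

    # Fetch a release binary and check it against its published digest.
    VER=v1.35.0-rc.1
    URL=https://dl.k8s.io/release/$VER/bin/linux/arm64/kubelet
    curl -fsSLo kubelet "$URL"
    echo "$(curl -fsSL "$URL.sha256")  kubelet" | sha256sum -c -
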
	I1222 01:33:17.633126 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:33:17.641170 2113859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:33:17.657290 2113859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:33:17.672129 2113859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:33:17.686042 2113859 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:33:17.690564 2113859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:33:17.701363 2113859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:17.830376 2113859 ssh_runner.go:195] Run: sudo systemctl start kubelet
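
The /etc/hosts edit at 01:33:17.690 is idempotent: any existing control-plane entry is filtered out before the current one is appended, and the result is copied back in one step. The same pattern in isolation:

    # Replace-or-add the control-plane host entry without duplicating it.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
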
	I1222 01:33:17.851150 2113859 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054 for IP: 192.168.76.2
	I1222 01:33:17.851171 2113859 certs.go:195] generating shared ca certs ...
	I1222 01:33:17.851188 2113859 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.851322 2113859 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:33:17.851370 2113859 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:33:17.851382 2113859 certs.go:257] generating profile certs ...
	I1222 01:33:17.851434 2113859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key
	I1222 01:33:17.851451 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt with IP's: []
	I1222 01:33:17.979141 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt ...
	I1222 01:33:17.979169 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: {Name:mkcc53105d7ba3a27d1d6f12bb0c9eed295215c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.979387 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key ...
	I1222 01:33:17.979402 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key: {Name:mke07de840627684a57c31550e8378dc9e95cd66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.979495 2113859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a
	I1222 01:33:17.979512 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:33:18.564226 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a ...
	I1222 01:33:18.564256 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a: {Name:mkd8e629f62eab359f7658bb5f9017eca71fb865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.564456 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a ...
	I1222 01:33:18.564471 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a: {Name:mke0f2bc960eb03fd7ec564dbd0be938a1e53d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.564555 2113859 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt
	I1222 01:33:18.564630 2113859 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key
	I1222 01:33:18.564692 2113859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key
	I1222 01:33:18.564710 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt with IP's: []
	I1222 01:33:18.841279 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt ...
	I1222 01:33:18.841307 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt: {Name:mkccabd6e6a4f8a64f07dedbf98fb912005ff0f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.841487 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key ...
	I1222 01:33:18.841501 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key: {Name:mk865b3aaa0336b2f3bf58a8de85bdb9588e55ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
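
The profile certs above are signed by the shared minikubeCA, with the apiserver cert carrying the service, loopback, and node IPs as SANs. A rough openssl equivalent of that signing step (filenames illustrative; minikube does this in-process rather than by shelling out):

    # Issue an apiserver cert signed by the cluster CA with the same IP SANs.
    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')
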
	I1222 01:33:18.841685 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:33:18.841733 2113859 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:33:18.841746 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:33:18.841773 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:33:18.841800 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:33:18.841864 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:33:18.841917 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:18.842483 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:33:18.860286 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:33:18.878323 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:33:18.900560 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:33:18.920760 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:33:18.940517 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:33:18.962227 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:33:18.980595 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:33:19.000729 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:33:19.021971 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:33:19.040728 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:33:19.058835 2113859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:33:19.072001 2113859 ssh_runner.go:195] Run: openssl version
	I1222 01:33:19.079871 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.094194 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:33:19.108368 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.112634 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.112704 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.156081 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:33:19.163807 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:33:19.171280 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.178589 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:33:19.186351 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.190135 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.190200 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.231471 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:33:19.239260 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:33:19.246949 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.254373 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:33:19.262303 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.266149 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.266256 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.308645 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:33:19.316489 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
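
Each CA is activated for openssl consumers by linking it under its subject hash in /etc/ssl/certs, which is why the `sudo test -L` probes above look for names like b5213941.0. The step in isolation:

    # Expose a CA under its openssl subject-hash name.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 for this CA
    sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"
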
	I1222 01:33:19.324189 2113859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:33:19.327847 2113859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:33:19.327926 2113859 kubeadm.go:401] StartCluster: {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:33:19.328012 2113859 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:33:19.328077 2113859 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:33:19.367450 2113859 cri.go:96] found id: ""
	I1222 01:33:19.367568 2113859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:33:19.379376 2113859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:33:19.390640 2113859 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:33:19.390726 2113859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:33:19.407000 2113859 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:33:19.407021 2113859 kubeadm.go:158] found existing configuration files:
	
	I1222 01:33:19.407093 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:33:19.414853 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:33:19.414947 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:33:19.423608 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:33:19.431372 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:33:19.431496 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:33:19.439057 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:33:19.447157 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:33:19.447221 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:33:19.455751 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:33:19.463261 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:33:19.463374 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
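
The sweep above keeps a kubeconfig only if it already points at the expected control-plane endpoint; on this first start every grep exits 2 because the files do not exist, so the rm calls are no-ops. Condensed:

    # Drop any kubeconfig that does not reference the expected endpoint.
    EP=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
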
	I1222 01:33:19.470637 2113859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:33:19.512148 2113859 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:33:19.512520 2113859 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:33:19.593167 2113859 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:33:19.593327 2113859 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:33:19.593393 2113859 kubeadm.go:319] OS: Linux
	I1222 01:33:19.593474 2113859 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:33:19.593555 2113859 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:33:19.593644 2113859 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:33:19.593721 2113859 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:33:19.593798 2113859 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:33:19.593869 2113859 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:33:19.593949 2113859 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:33:19.594020 2113859 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:33:19.594096 2113859 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:33:19.662779 2113859 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:33:19.662894 2113859 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:33:19.662988 2113859 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:33:19.682584 2113859 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1222 01:33:17.958734 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:19.954525 2110678 pod_ready.go:94] pod "coredns-5dd5756b68-6hgk8" is "Ready"
	I1222 01:33:19.954559 2110678 pod_ready.go:86] duration metric: took 35.506360771s for pod "coredns-5dd5756b68-6hgk8" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.957669 2110678 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.962500 2110678 pod_ready.go:94] pod "etcd-old-k8s-version-535197" is "Ready"
	I1222 01:33:19.962529 2110678 pod_ready.go:86] duration metric: took 4.830952ms for pod "etcd-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.965352 2110678 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.969832 2110678 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-535197" is "Ready"
	I1222 01:33:19.969862 2110678 pod_ready.go:86] duration metric: took 4.484041ms for pod "kube-apiserver-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.972422 2110678 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.156732 2110678 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-535197" is "Ready"
	I1222 01:33:20.156816 2110678 pod_ready.go:86] duration metric: took 184.367283ms for pod "kube-controller-manager-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.353364 2110678 pod_ready.go:83] waiting for pod "kube-proxy-lfhlv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.752210 2110678 pod_ready.go:94] pod "kube-proxy-lfhlv" is "Ready"
	I1222 01:33:20.752242 2110678 pod_ready.go:86] duration metric: took 398.849907ms for pod "kube-proxy-lfhlv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.952052 2110678 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:21.352137 2110678 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-535197" is "Ready"
	I1222 01:33:21.352170 2110678 pod_ready.go:86] duration metric: took 400.085029ms for pod "kube-scheduler-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:21.352184 2110678 pod_ready.go:40] duration metric: took 36.910285031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:33:21.443683 2110678 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1222 01:33:21.446881 2110678 out.go:203] 
	W1222 01:33:21.449856 2110678 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1222 01:33:21.452799 2110678 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1222 01:33:21.458527 2110678 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-535197" cluster and "default" namespace by default
	I1222 01:33:19.688410 2113859 out.go:252]   - Generating certificates and keys ...
	I1222 01:33:19.688524 2113859 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:33:19.688595 2113859 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:33:20.151895 2113859 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:33:20.401703 2113859 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:33:21.005040 2113859 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:33:21.140685 2113859 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:33:21.824673 2113859 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:33:21.825252 2113859 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:33:22.020671 2113859 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:33:22.020933 2113859 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:33:22.474306 2113859 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:33:22.884092 2113859 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:33:23.034492 2113859 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:33:23.034774 2113859 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:33:23.075352 2113859 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:33:23.135102 2113859 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:33:23.251153 2113859 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:33:23.986932 2113859 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:33:24.273870 2113859 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:33:24.274658 2113859 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:33:24.277328 2113859 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:33:24.281021 2113859 out.go:252]   - Booting up control plane ...
	I1222 01:33:24.281119 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:33:24.281198 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:33:24.281264 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:33:24.298176 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:33:24.298295 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:33:24.306342 2113859 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:33:24.306969 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:33:24.307038 2113859 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:33:24.444902 2113859 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:33:24.445025 2113859 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
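
The kubelet-check phase polls the kubelet's local healthz endpoint until it responds, with the 4m0s ceiling noted above. The probe it is waiting on, written as a loop:

    # Poll the kubelet health endpoint until it answers.
    until curl -fsS http://127.0.0.1:10248/healthz >/dev/null; do
      sleep 1
    done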
	
	
	==> CRI-O <==
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.840463265Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b43f29d-8f77-4db5-b7e3-29e3aad4e404 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.841672295Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c660f8e-0a1b-4aab-bd7a-5d4f7825c03a name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.843451278Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper" id=ecd706ca-ec69-4914-ad93-aa4200b33d11 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.84356937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.861743557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.86385688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.927997333Z" level=info msg="Created container 213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper" id=ecd706ca-ec69-4914-ad93-aa4200b33d11 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.930953346Z" level=info msg="Starting container: 213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e" id=1060593a-a47b-4488-89f2-37d632af1a52 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.934701114Z" level=info msg="Started container" PID=1642 containerID=213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper id=1060593a-a47b-4488-89f2-37d632af1a52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=60dc2140434a0c650ba2f4206eb54015ea94958398d65f57054faa6ffe4e60d0
	Dec 22 01:33:17 old-k8s-version-535197 conmon[1640]: conmon 213d935ce1b4c5a81f8b <ninfo>: container 1642 exited with status 1
	Dec 22 01:33:18 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:18.085237223Z" level=info msg="Removing container: dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7" id=0cb652c8-0ebc-41ec-a276-451a733f0b42 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:33:18 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:18.099900616Z" level=info msg="Error loading conmon cgroup of container dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7: cgroup deleted" id=0cb652c8-0ebc-41ec-a276-451a733f0b42 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:33:18 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:18.104224752Z" level=info msg="Removed container dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper" id=0cb652c8-0ebc-41ec-a276-451a733f0b42 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.806305077Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.84106561Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.841281365Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.841397571Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.849431256Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.849598989Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.849688955Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.865527869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.865704775Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.865785076Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.874901535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.875059331Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	213d935ce1b4c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   60dc2140434a0       dashboard-metrics-scraper-5f989dc9cf-sc9fz       kubernetes-dashboard
	2cdb0c07e9744       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   d94ba303a338e       storage-provisioner                              kube-system
	53cf842660721       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago       Running             kubernetes-dashboard        0                   ffed5d7f89a72       kubernetes-dashboard-8694d4445c-bhzmc            kubernetes-dashboard
	b58fe534665d4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   cfe1c6442f026       coredns-5dd5756b68-6hgk8                         kube-system
	b2b297128099a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   853cc164dc835       kube-proxy-lfhlv                                 kube-system
	5e1d3bdf14e11       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   6ffb08bea5224       busybox                                          default
	336bb7045b1cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   d94ba303a338e       storage-provisioner                              kube-system
	e20ee4063c483       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           54 seconds ago       Running             kindnet-cni                 1                   95fcb69c1861a       kindnet-bf7tv                                    kube-system
	71f0891781692       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   8cf8be051cd0e       kube-controller-manager-old-k8s-version-535197   kube-system
	58d08a0c57716       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   bd10dfc5e5e50       kube-apiserver-old-k8s-version-535197            kube-system
	ee1dc9c8ebfd9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   ee49aec7d8932       etcd-old-k8s-version-535197                      kube-system
	6458377dd96fa       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8cb2d5768fb29       kube-scheduler-old-k8s-version-535197            kube-system
	
	
	==> coredns [b58fe534665d48104bc15d15f4c6f0ece29c6665e8419ee2a5bdd0272b03c474] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41875 - 18211 "HINFO IN 1988241562069470050.4449834520038572590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00570071s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-535197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-535197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=old-k8s-version-535197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_31_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:31:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-535197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:33:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-535197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                7cd106b7-7f02-4dbd-b127-b7707e12d39f
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-6hgk8                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-535197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-bf7tv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-535197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-535197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-lfhlv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-535197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-sc9fz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bhzmc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-535197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-535197 event: Registered Node old-k8s-version-535197 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-535197 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-535197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-535197 event: Registered Node old-k8s-version-535197 in Controller
	
	
	==> dmesg <==
	[Dec22 01:00] overlayfs: idmapped layers are currently not supported
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee1dc9c8ebfd92a5a6c114d952ca8de817933c150946b2636677ec572aa63702] <==
	{"level":"info","ts":"2025-12-22T01:32:35.842667Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-22T01:32:35.842675Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-22T01:32:35.842986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-22T01:32:35.843042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-22T01:32:35.843118Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:32:35.843142Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:32:35.881868Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-22T01:32:35.882117Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-22T01:32:35.882169Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-22T01:32:35.882244Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-22T01:32:35.882275Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-22T01:32:37.421212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-22T01:32:37.421259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-22T01:32:37.421287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-22T01:32:37.4213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.421307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.421317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.421354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.429525Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-535197 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-22T01:32:37.429657Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-22T01:32:37.430649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-22T01:32:37.431312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-22T01:32:37.432198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-22T01:32:37.439444Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-22T01:32:37.439529Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:33:36 up 1 day,  8:16,  0 user,  load average: 3.82, 3.20, 2.49
	Linux old-k8s-version-535197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e20ee4063c483434ea2065bdae477712ff0bd65c9cc1905a7305884c3a1d5a61] <==
	I1222 01:32:41.539289       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:32:41.539472       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:32:41.539591       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:32:41.539602       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:32:41.539614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:32:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:32:41.803203       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:32:41.824547       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:32:41.824675       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:32:41.825659       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1222 01:33:11.804062       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1222 01:33:11.825691       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1222 01:33:11.825904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1222 01:33:11.826077       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1222 01:33:13.025684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:33:13.025780       1 metrics.go:72] Registering metrics
	I1222 01:33:13.025882       1 controller.go:711] "Syncing nftables rules"
	I1222 01:33:21.805134       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:33:21.805173       1 main.go:301] handling current node
	I1222 01:33:31.803431       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:33:31.803460       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58d08a0c5771635b028dd68e96fe171c409dd75d3491d6f68517244657c139ff] <==
	I1222 01:32:40.283417       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1222 01:32:40.283648       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1222 01:32:40.285285       1 aggregator.go:166] initial CRD sync complete...
	I1222 01:32:40.285378       1 autoregister_controller.go:141] Starting autoregister controller
	I1222 01:32:40.285408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:32:40.285438       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:32:40.286520       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1222 01:32:40.304228       1 shared_informer.go:318] Caches are synced for configmaps
	I1222 01:32:40.304448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1222 01:32:40.344674       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1222 01:32:40.344706       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1222 01:32:40.348891       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:32:40.353421       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1222 01:32:40.418574       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1222 01:32:40.719639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:32:43.884513       1 controller.go:624] quota admission added evaluator for: namespaces
	I1222 01:32:43.957152       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1222 01:32:44.001488       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:32:44.032320       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:32:44.127356       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1222 01:32:44.331952       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.97.199"}
	I1222 01:32:44.354253       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.226.17"}
	I1222 01:32:53.306010       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1222 01:32:53.556909       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:32:53.734405       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [71f0891781692fab9a2c32b04d877bfefe34482d11319ba2640e98531c0dc00a] <==
	I1222 01:32:53.394789       1 shared_informer.go:318] Caches are synced for resource quota
	I1222 01:32:53.685648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="457.072187ms"
	I1222 01:32:53.690656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.799µs"
	I1222 01:32:53.703402       1 shared_informer.go:318] Caches are synced for garbage collector
	I1222 01:32:53.703521       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1222 01:32:53.705221       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-bhzmc"
	I1222 01:32:53.710505       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-sc9fz"
	I1222 01:32:53.754153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="438.355903ms"
	I1222 01:32:53.755779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="427.703623ms"
	I1222 01:32:53.786267       1 shared_informer.go:318] Caches are synced for garbage collector
	I1222 01:32:53.787454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="33.151071ms"
	I1222 01:32:53.787624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.129µs"
	I1222 01:32:53.801597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.428789ms"
	I1222 01:32:53.801765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.982µs"
	I1222 01:32:53.802067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.012µs"
	I1222 01:32:53.886554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.692µs"
	I1222 01:33:01.028487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.851µs"
	I1222 01:33:02.054672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.381µs"
	I1222 01:33:03.048760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.895µs"
	I1222 01:33:08.082706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.882936ms"
	I1222 01:33:08.082901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.313µs"
	I1222 01:33:19.105782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.293µs"
	I1222 01:33:19.769052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.593583ms"
	I1222 01:33:19.770194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.673µs"
	I1222 01:33:25.865791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.99µs"
	
	
	==> kube-proxy [b2b297128099a31b08568ff2228dd1cd7119453d1eba86c528fe063e19bda8c0] <==
	I1222 01:32:42.170704       1 server_others.go:69] "Using iptables proxy"
	I1222 01:32:42.258998       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1222 01:32:42.442907       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:32:42.450591       1 server_others.go:152] "Using iptables Proxier"
	I1222 01:32:42.450691       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1222 01:32:42.450723       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1222 01:32:42.450771       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1222 01:32:42.451017       1 server.go:846] "Version info" version="v1.28.0"
	I1222 01:32:42.451275       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:32:42.451992       1 config.go:188] "Starting service config controller"
	I1222 01:32:42.452071       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1222 01:32:42.452117       1 config.go:97] "Starting endpoint slice config controller"
	I1222 01:32:42.452164       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1222 01:32:42.452653       1 config.go:315] "Starting node config controller"
	I1222 01:32:42.452713       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1222 01:32:42.553133       1 shared_informer.go:318] Caches are synced for node config
	I1222 01:32:42.553175       1 shared_informer.go:318] Caches are synced for service config
	I1222 01:32:42.553202       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6458377dd96fa4bd80200d2466d86226b03550784e2c540f23e3138350d286c4] <==
	I1222 01:32:38.049983       1 serving.go:348] Generated self-signed cert in-memory
	W1222 01:32:40.104843       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1222 01:32:40.104967       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:32:40.105003       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1222 01:32:40.109432       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1222 01:32:40.210405       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1222 01:32:40.210449       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:32:40.212263       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1222 01:32:40.212376       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:32:40.212397       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1222 01:32:40.212416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1222 01:32:40.274001       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1222 01:32:40.274120       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:32:40.298187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1222 01:32:40.298225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1222 01:32:40.298415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1222 01:32:40.298429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I1222 01:32:41.212711       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962137     783 projected.go:198] Error preparing data for projected volume kube-api-access-8k56c for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz: failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962284     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/16d0eb14-b3f7-4ba4-9855-8ffd99e83905-kube-api-access-8k56c podName:16d0eb14-b3f7-4ba4-9855-8ffd99e83905 nodeName:}" failed. No retries permitted until 2025-12-22 01:32:55.46225276 +0000 UTC m=+20.819772161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8k56c" (UniqueName: "kubernetes.io/projected/16d0eb14-b3f7-4ba4-9855-8ffd99e83905-kube-api-access-8k56c") pod "dashboard-metrics-scraper-5f989dc9cf-sc9fz" (UID: "16d0eb14-b3f7-4ba4-9855-8ffd99e83905") : failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.961684     783 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962460     783 projected.go:198] Error preparing data for projected volume kube-api-access-z7s4v for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bhzmc: failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962547     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/143daf7b-4993-46cb-bc29-44e6d8f97352-kube-api-access-z7s4v podName:143daf7b-4993-46cb-bc29-44e6d8f97352 nodeName:}" failed. No retries permitted until 2025-12-22 01:32:55.462534606 +0000 UTC m=+20.820054007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z7s4v" (UniqueName: "kubernetes.io/projected/143daf7b-4993-46cb-bc29-44e6d8f97352-kube-api-access-z7s4v") pod "kubernetes-dashboard-8694d4445c-bhzmc" (UID: "143daf7b-4993-46cb-bc29-44e6d8f97352") : failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:55 old-k8s-version-535197 kubelet[783]: W1222 01:32:55.918775     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/crio-ffed5d7f89a7274cff785c73a9711235d17ba848f48ce8c354c308786538823e WatchSource:0}: Error finding container ffed5d7f89a7274cff785c73a9711235d17ba848f48ce8c354c308786538823e: Status 404 returned error can't find the container with id ffed5d7f89a7274cff785c73a9711235d17ba848f48ce8c354c308786538823e
	Dec 22 01:33:01 old-k8s-version-535197 kubelet[783]: I1222 01:33:01.014583     783 scope.go:117] "RemoveContainer" containerID="e9233c95ff47cd6b1d5213ba2a4d982955a23056a515a23175263074ba9fc037"
	Dec 22 01:33:02 old-k8s-version-535197 kubelet[783]: I1222 01:33:02.021559     783 scope.go:117] "RemoveContainer" containerID="e9233c95ff47cd6b1d5213ba2a4d982955a23056a515a23175263074ba9fc037"
	Dec 22 01:33:02 old-k8s-version-535197 kubelet[783]: I1222 01:33:02.030322     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:02 old-k8s-version-535197 kubelet[783]: E1222 01:33:02.033545     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:03 old-k8s-version-535197 kubelet[783]: I1222 01:33:03.028811     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:03 old-k8s-version-535197 kubelet[783]: E1222 01:33:03.029080     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:05 old-k8s-version-535197 kubelet[783]: I1222 01:33:05.851398     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:05 old-k8s-version-535197 kubelet[783]: E1222 01:33:05.851694     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:08 old-k8s-version-535197 kubelet[783]: I1222 01:33:08.070354     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bhzmc" podStartSLOduration=3.287552187 podCreationTimestamp="2025-12-22 01:32:53 +0000 UTC" firstStartedPulling="2025-12-22 01:32:55.942058894 +0000 UTC m=+21.299578295" lastFinishedPulling="2025-12-22 01:33:07.724798315 +0000 UTC m=+33.082317715" observedRunningTime="2025-12-22 01:33:08.069702866 +0000 UTC m=+33.427222266" watchObservedRunningTime="2025-12-22 01:33:08.070291607 +0000 UTC m=+33.427811008"
	Dec 22 01:33:12 old-k8s-version-535197 kubelet[783]: I1222 01:33:12.066299     783 scope.go:117] "RemoveContainer" containerID="336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04"
	Dec 22 01:33:17 old-k8s-version-535197 kubelet[783]: I1222 01:33:17.839394     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:18 old-k8s-version-535197 kubelet[783]: I1222 01:33:18.083763     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:19 old-k8s-version-535197 kubelet[783]: I1222 01:33:19.088057     783 scope.go:117] "RemoveContainer" containerID="213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	Dec 22 01:33:19 old-k8s-version-535197 kubelet[783]: E1222 01:33:19.088735     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:25 old-k8s-version-535197 kubelet[783]: I1222 01:33:25.851350     783 scope.go:117] "RemoveContainer" containerID="213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	Dec 22 01:33:25 old-k8s-version-535197 kubelet[783]: E1222 01:33:25.851671     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:33 old-k8s-version-535197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:33:33 old-k8s-version-535197 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:33:33 old-k8s-version-535197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [53cf84266072177d217ef2457215faa2f0d9e68a7d7cb456e860ff12d6878b3c] <==
	2025/12/22 01:33:07 Using namespace: kubernetes-dashboard
	2025/12/22 01:33:07 Using in-cluster config to connect to apiserver
	2025/12/22 01:33:07 Using secret token for csrf signing
	2025/12/22 01:33:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/22 01:33:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/22 01:33:07 Successful initial request to the apiserver, version: v1.28.0
	2025/12/22 01:33:07 Generating JWE encryption key
	2025/12/22 01:33:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/22 01:33:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/22 01:33:08 Initializing JWE encryption key from synchronized object
	2025/12/22 01:33:08 Creating in-cluster Sidecar client
	2025/12/22 01:33:08 Serving insecurely on HTTP port: 9090
	2025/12/22 01:33:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:33:07 Starting overwatch
	
	
	==> storage-provisioner [2cdb0c07e974428487346498e876bb16c65cfa7e95e5303ed7f92aa135b676a5] <==
	I1222 01:33:12.187029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:33:12.211652       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:33:12.211781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1222 01:33:29.613654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:33:29.614106       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cce17c97-ab91-4b3b-aab2-fd0b22a9040e", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-535197_eb9cbd64-7af6-4c74-b0b8-24628253093d became leader
	I1222 01:33:29.614184       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-535197_eb9cbd64-7af6-4c74-b0b8-24628253093d!
	I1222 01:33:29.714611       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-535197_eb9cbd64-7af6-4c74-b0b8-24628253093d!
	
	
	==> storage-provisioner [336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04] <==
	I1222 01:32:41.615135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1222 01:33:11.642172       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-535197 -n old-k8s-version-535197
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-535197 -n old-k8s-version-535197: exit status 2 (369.580704ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-535197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
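The failed pause check above can be replayed by hand against the same profile. This is a minimal sketch using only the commands already shown in the trace (the profile name and binary path are taken from the logs above; everything else is standard minikube/kubectl usage):

	# Query just the API server field of minikube's status via a Go template;
	# exit status 2 marks a degraded state, which helpers_test.go treats as "may be ok".
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-535197

	# Mirror the post-mortem query: list any pod, in any namespace, that is not
	# in the Running phase.
	kubectl --context old-k8s-version-535197 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

An empty result from the second command means every pod reports Running, so the exit-status-2 status above points at the paused/unpaused component state rather than at workload failures.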
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-535197
helpers_test.go:244: (dbg) docker inspect old-k8s-version-535197:

-- stdout --
	[
	    {
	        "Id": "d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1",
	        "Created": "2025-12-22T01:31:08.113695685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2110806,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:32:27.52551515Z",
	            "FinishedAt": "2025-12-22T01:32:26.682445299Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1-json.log",
	        "Name": "/old-k8s-version-535197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-535197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-535197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1",
	                "LowerDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39e5910d3bc1f44bc28a33f8e2cf184d64af2fe009fb2d29a79ad94480b103cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-535197",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-535197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-535197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-535197",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-535197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e1c27da9562cbf43be7a07639b51646cc1ac89e25508c5a88707ed70c6fe984",
	            "SandboxKey": "/var/run/docker/netns/1e1c27da9562",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38746"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38750"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38748"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38749"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-535197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:2d:ad:6c:49:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5aea71c8f08ab22f4aef4ac9bf8714a6f83176256b6002b76a15250fc2c9efb0",
	                    "EndpointID": "0420f8cc7980d0c6431414d4eb594d26f27da60b45c210b4d88ac6d37d8c9ec8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-535197",
	                        "d786ec4bcd2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
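In the inspect output above, HostConfig.PortBindings requests an ephemeral host port for 8443/tcp (HostPort ""), and NetworkSettings.Ports shows the port Docker actually assigned (38749 here). The same Go-template lookup minikube itself uses later for 22/tcp resolves it by hand; a minimal sketch against the container captured above:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-535197
    # prints 38749, the host side of the API server port mapping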
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197: exit status 2 (339.759657ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
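minikube status encodes component state in its exit code, so exit status 2 with a Running host is consistent with a paused or stopped control plane rather than a harness failure (hence "may be ok"). A sketch for getting the per-component breakdown instead of just the {{.Host}} field:

    out/minikube-linux-arm64 status -p old-k8s-version-535197 --output=json
    # emits Host, Kubelet, APIServer and Kubeconfig states; exact field set varies by minikube version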
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-535197 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-535197 logs -n 25: (1.261725682s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ stop    │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p force-systemd-flag-314223 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ force-systemd-flag-314223 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p force-systemd-flag-314223                                                                                                                                                                                                                  │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ start   │ -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ cert-options-433911 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ -p cert-options-433911 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ stop    │ -p old-k8s-version-535197 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054         │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
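The audit trail ends with the pause command itself: every earlier old-k8s-version-535197 step has an END TIME, but the final pause invocation does not, meaning it was still running (or had already failed) when this post-mortem was captured. That row is the exact command to replay when reproducing the failure:

    out/minikube-linux-arm64 pause -p old-k8s-version-535197 --alsologtostderr -v=1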
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:32:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
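Every entry below follows that klog header, so the first character is the severity (I=info, W=warning, E=error, F=fatal) followed by mmdd. A quick way to pull only the warnings and errors out of a saved copy of this trace (a sketch; the filename is hypothetical):

    grep -E '^[[:space:]]*[WEF][0-9]{4} ' minikube-pause.log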
	I1222 01:32:54.033494 2113859 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:32:54.033719 2113859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:32:54.033749 2113859 out.go:374] Setting ErrFile to fd 2...
	I1222 01:32:54.033771 2113859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:32:54.034085 2113859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:32:54.034535 2113859 out.go:368] Setting JSON to false
	I1222 01:32:54.035756 2113859 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116123,"bootTime":1766251051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:32:54.035869 2113859 start.go:143] virtualization:  
	I1222 01:32:54.039623 2113859 out.go:179] * [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:32:54.042831 2113859 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:32:54.042915 2113859 notify.go:221] Checking for updates...
	I1222 01:32:54.049207 2113859 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:32:54.052271 2113859 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:32:54.055189 2113859 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:32:54.058044 2113859 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:32:54.060848 2113859 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:32:54.064390 2113859 config.go:182] Loaded profile config "old-k8s-version-535197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1222 01:32:54.064526 2113859 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:32:54.095516 2113859 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:32:54.095679 2113859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:32:54.165511 2113859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:32:54.155633193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:32:54.165626 2113859 docker.go:319] overlay module found
	I1222 01:32:54.168797 2113859 out.go:179] * Using the docker driver based on user configuration
	I1222 01:32:54.171717 2113859 start.go:309] selected driver: docker
	I1222 01:32:54.171740 2113859 start.go:928] validating driver "docker" against <nil>
	I1222 01:32:54.171755 2113859 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:32:54.172503 2113859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:32:54.241624 2113859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:32:54.231791653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:32:54.241776 2113859 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:32:54.242087 2113859 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:32:54.245038 2113859 out.go:179] * Using Docker driver with root privileges
	I1222 01:32:54.247987 2113859 cni.go:84] Creating CNI manager for ""
	I1222 01:32:54.248055 2113859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:32:54.248070 2113859 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:32:54.248154 2113859 start.go:353] cluster config:
	{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:32:54.251532 2113859 out.go:179] * Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	I1222 01:32:54.254388 2113859 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:32:54.257314 2113859 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:32:54.260261 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:32:54.260351 2113859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:32:54.260541 2113859 cache.go:107] acquiring lock: {Name:mkce8c58631c5f0d23bf68d20d333345f25cc3e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.260625 2113859 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:32:54.260643 2113859 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.938µs
	I1222 01:32:54.260658 2113859 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:32:54.260673 2113859 cache.go:107] acquiring lock: {Name:mk4201be15d3fc5460d5a3841ee13ef2c95ea347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.260776 2113859 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:32:54.260964 2113859 cache.go:107] acquiring lock: {Name:mkf2e3bf6867cb15e485fef5b9e233e7559c99a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261043 2113859 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:32:54.261216 2113859 cache.go:107] acquiring lock: {Name:mkb8446b3e0ab7883b46b478905fd798f55f01e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261418 2113859 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:32:54.261616 2113859 cache.go:107] acquiring lock: {Name:mk77482801a8c57f7731abc5d9887216f12ba8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261720 2113859 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:32:54.261881 2113859 cache.go:107] acquiring lock: {Name:mk692f79fde95c3755a4408e63ab002578541c74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.261943 2113859 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:32:54.261958 2113859 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 81.179µs
	I1222 01:32:54.261966 2113859 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:32:54.261984 2113859 cache.go:107] acquiring lock: {Name:mkd233ea519aa3a2daed97ba862dcfb34d3be39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.262064 2113859 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:32:54.262213 2113859 cache.go:107] acquiring lock: {Name:mk309b91ac5e9b8dc5721727680b39f391c5db3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.262321 2113859 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:32:54.263575 2113859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:32:54.263641 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json: {Name:mk7f1ee0d07d56b850c7f1ba7b969a78d9e732b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:32:54.264812 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:32:54.266342 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:32:54.266586 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:32:54.267191 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:32:54.267585 2113859 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:32:54.267880 2113859 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:32:54.287070 2113859 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:32:54.287091 2113859 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:32:54.287105 2113859 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:32:54.287135 2113859 start.go:360] acquireMachinesLock for no-preload-491054: {Name:mk963c2c890d85a550b13090cd7467d59e3caf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:32:54.287228 2113859 start.go:364] duration metric: took 78.931µs to acquireMachinesLock for "no-preload-491054"
	I1222 01:32:54.287257 2113859 start.go:93] Provisioning new machine with config: &{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:32:54.287327 2113859 start.go:125] createHost starting for "" (driver="docker")
	W1222 01:32:53.457546 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:32:55.458888 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:32:54.290930 2113859 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:32:54.291159 2113859 start.go:159] libmachine.API.Create for "no-preload-491054" (driver="docker")
	I1222 01:32:54.291195 2113859 client.go:173] LocalClient.Create starting
	I1222 01:32:54.291269 2113859 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:32:54.291299 2113859 main.go:144] libmachine: Decoding PEM data...
	I1222 01:32:54.291313 2113859 main.go:144] libmachine: Parsing certificate...
	I1222 01:32:54.291370 2113859 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:32:54.291392 2113859 main.go:144] libmachine: Decoding PEM data...
	I1222 01:32:54.291403 2113859 main.go:144] libmachine: Parsing certificate...
	I1222 01:32:54.291849 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:32:54.316750 2113859 cli_runner.go:211] docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:32:54.316834 2113859 network_create.go:284] running [docker network inspect no-preload-491054] to gather additional debugging logs...
	I1222 01:32:54.316855 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054
	W1222 01:32:54.335649 2113859 cli_runner.go:211] docker network inspect no-preload-491054 returned with exit code 1
	I1222 01:32:54.335682 2113859 network_create.go:287] error running [docker network inspect no-preload-491054]: docker network inspect no-preload-491054: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-491054 not found
	I1222 01:32:54.335696 2113859 network_create.go:289] output of [docker network inspect no-preload-491054]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-491054 not found
	
	** /stderr **
	I1222 01:32:54.335794 2113859 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:32:54.356466 2113859 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:32:54.356865 2113859 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:32:54.357252 2113859 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:32:54.357789 2113859 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a487f0}
	I1222 01:32:54.357815 2113859 network_create.go:124] attempt to create docker network no-preload-491054 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:32:54.357887 2113859 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-491054 no-preload-491054
	I1222 01:32:54.441852 2113859 network_create.go:108] docker network no-preload-491054 192.168.76.0/24 created
	I1222 01:32:54.441881 2113859 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-491054" container
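minikube walked the existing bridge networks (192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were taken), picked the next free /24, created it with gateway .1, and reserved .2 for the node. A sketch for confirming the result with plain docker:

    docker network inspect no-preload-491054 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected: 192.168.76.0/24 192.168.76.1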
	I1222 01:32:54.441967 2113859 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:32:54.458632 2113859 cli_runner.go:164] Run: docker volume create no-preload-491054 --label name.minikube.sigs.k8s.io=no-preload-491054 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:32:54.478073 2113859 oci.go:103] Successfully created a docker volume no-preload-491054
	I1222 01:32:54.478163 2113859 cli_runner.go:164] Run: docker run --rm --name no-preload-491054-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491054 --entrypoint /usr/bin/test -v no-preload-491054:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:32:54.618166 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:32:54.624668 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:32:54.627006 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:32:54.628218 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:32:54.636973 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:32:54.648281 2113859 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:32:55.019288 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:32:55.019318 2113859 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 757.706011ms
	I1222 01:32:55.019334 2113859 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:32:55.181451 2113859 oci.go:107] Successfully prepared a docker volume no-preload-491054
	I1222 01:32:55.181497 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1222 01:32:55.181621 2113859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:32:55.181770 2113859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:32:55.246556 2113859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-491054 --name no-preload-491054 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491054 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-491054 --network no-preload-491054 --ip 192.168.76.2 --volume no-preload-491054:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
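Rewrapped for readability, that single docker run line creates the privileged "node" container from the kicbase image with a pinned IP, resource limits, and one ephemeral loopback publish per node port (minikube's --label flags elided here; all other flags exactly as logged):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --hostname no-preload-491054 --name no-preload-491054 \
      --network no-preload-491054 --ip 192.168.76.2 \
      --volume no-preload-491054:/var \
      --memory=3072mb --cpus=2 -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
      --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5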
	I1222 01:32:55.548149 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:32:55.548173 2113859 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.286961516s
	I1222 01:32:55.548187 2113859 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:32:55.637916 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:32:55.637949 2113859 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.376987849s
	I1222 01:32:55.637962 2113859 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:32:55.644590 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:32:55.644659 2113859 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.382447056s
	I1222 01:32:55.644686 2113859 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:32:55.645277 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:32:55.645297 2113859 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 1.383313901s
	I1222 01:32:55.645307 2113859 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:32:55.676721 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Running}}
	I1222 01:32:55.706143 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:55.728682 2113859 cache.go:157] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:32:55.732697 2113859 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.47201229s
	I1222 01:32:55.733125 2113859 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:32:55.733195 2113859 cache.go:87] Successfully saved all images to host disk.
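Because this profile runs with --preload=false, there is no preloaded image tarball; each control-plane image is downloaded and saved individually into the per-architecture cache whose paths appear above. A sketch for inspecting the result on the host, using this run's MINIKUBE_HOME:

    ls /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/
    # kube-apiserver_v1.35.0-rc.1  kube-controller-manager_v1.35.0-rc.1  kube-proxy_v1.35.0-rc.1
    # kube-scheduler_v1.35.0-rc.1  etcd_3.6.6-0  pause_3.10.1  coredns/ ...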
	I1222 01:32:55.742044 2113859 cli_runner.go:164] Run: docker exec no-preload-491054 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:32:55.793131 2113859 oci.go:144] the created container "no-preload-491054" has a running status.
	I1222 01:32:55.793157 2113859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa...
	I1222 01:32:56.150782 2113859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:32:56.176874 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:56.199648 2113859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:32:56.199777 2113859 kic_runner.go:114] Args: [docker exec --privileged no-preload-491054 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:32:56.287457 2113859 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:32:56.317221 2113859 machine.go:94] provisionDockerMachine start ...
	I1222 01:32:56.317311 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:56.348674 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:56.349032 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:56.349043 2113859 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:32:56.349725 2113859 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56696->127.0.0.1:38751: read: connection reset by peer
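That dial error is the usual race against the freshly started container's sshd; minikube retries until the command succeeds a few lines later. The same session can be opened by hand with the generated key and the published 22/tcp port (key path and port taken from this run):

    ssh -i /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa \
        -p 38751 docker@127.0.0.1 hostname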
	W1222 01:32:57.461737 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:32:59.954969 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:01.958386 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:32:59.497388 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:32:59.497478 2113859 ubuntu.go:182] provisioning hostname "no-preload-491054"
	I1222 01:32:59.497575 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:59.521272 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:59.521594 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:59.521607 2113859 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-491054 && echo "no-preload-491054" | sudo tee /etc/hostname
	I1222 01:32:59.673216 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:32:59.673380 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:32:59.713978 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:32:59.714276 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:32:59.714295 2113859 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491054/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:32:59.877634 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:32:59.877663 2113859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:32:59.877705 2113859 ubuntu.go:190] setting up certificates
	I1222 01:32:59.877730 2113859 provision.go:84] configureAuth start
	I1222 01:32:59.877825 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:32:59.915162 2113859 provision.go:143] copyHostCerts
	I1222 01:32:59.915223 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:32:59.915233 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:32:59.915300 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:32:59.915379 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:32:59.915385 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:32:59.915410 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:32:59.915457 2113859 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:32:59.915462 2113859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:32:59.915485 2113859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:32:59.915527 2113859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.no-preload-491054 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054]
	I1222 01:33:00.019297 2113859 provision.go:177] copyRemoteCerts
	I1222 01:33:00.019469 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:33:00.019545 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.043873 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:00.239960 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:33:00.315095 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:33:00.352718 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:33:00.383851 2113859 provision.go:87] duration metric: took 506.093893ms to configureAuth
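configureAuth copied the host CA material and generated a Docker server certificate whose SANs cover 127.0.0.1, the node IP 192.168.76.2 and the usual hostnames. A sketch for verifying the SANs inside the node, mirroring the openssl check the cert-options test in the audit log performs (assumes openssl is present in the node image):

    out/minikube-linux-arm64 -p no-preload-491054 ssh -- sudo openssl x509 -text -noout -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'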
	I1222 01:33:00.383879 2113859 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:33:00.384086 2113859 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:33:00.384192 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.409890 2113859 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:00.410227 2113859 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38751 <nil> <nil>}
	I1222 01:33:00.410243 2113859 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:33:00.748942 2113859 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:33:00.748964 2113859 machine.go:97] duration metric: took 4.431724259s to provisionDockerMachine
	I1222 01:33:00.748975 2113859 client.go:176] duration metric: took 6.457773322s to LocalClient.Create
	I1222 01:33:00.748997 2113859 start.go:167] duration metric: took 6.457839838s to libmachine.API.Create "no-preload-491054"
	I1222 01:33:00.749006 2113859 start.go:293] postStartSetup for "no-preload-491054" (driver="docker")
	I1222 01:33:00.749021 2113859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:33:00.749096 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:33:00.749155 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.768562 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:00.873634 2113859 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:33:00.877828 2113859 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:33:00.877854 2113859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:33:00.877865 2113859 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:33:00.877922 2113859 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:33:00.878011 2113859 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:33:00.878120 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:33:00.885734 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:00.904977 2113859 start.go:296] duration metric: took 155.94443ms for postStartSetup
	I1222 01:33:00.905414 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:33:00.926393 2113859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:33:00.926680 2113859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:33:00.926736 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:00.943814 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.043968 2113859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:33:01.049606 2113859 start.go:128] duration metric: took 6.762264752s to createHost
	I1222 01:33:01.049630 2113859 start.go:83] releasing machines lock for "no-preload-491054", held for 6.762389851s
	I1222 01:33:01.049702 2113859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:33:01.074676 2113859 ssh_runner.go:195] Run: cat /version.json
	I1222 01:33:01.074732 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:01.074954 2113859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:33:01.075004 2113859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:33:01.118316 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.119194 2113859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38751 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:33:01.358638 2113859 ssh_runner.go:195] Run: systemctl --version
	I1222 01:33:01.368096 2113859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:33:01.437466 2113859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:33:01.443994 2113859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:33:01.444068 2113859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:33:01.492466 2113859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:33:01.492540 2113859 start.go:496] detecting cgroup driver to use...
	I1222 01:33:01.492587 2113859 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:33:01.492677 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:33:01.520160 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:33:01.553116 2113859 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:33:01.553278 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:33:01.578847 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:33:01.600093 2113859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:33:01.775882 2113859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:33:01.964970 2113859 docker.go:234] disabling docker service ...
	I1222 01:33:01.965089 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:33:01.996139 2113859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:33:02.014677 2113859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:33:02.197851 2113859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:33:02.368250 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:33:02.384327 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:33:02.403515 2113859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:33:02.403650 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.415198 2113859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:33:02.415321 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.425372 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.434744 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.444077 2113859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:33:02.452445 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.461779 2113859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:02.476543 2113859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
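	Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (a sketch; section placement follows CRI-O's stock layout, and the real file carries other keys too):
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
	
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]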
	I1222 01:33:02.486290 2113859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:33:02.494924 2113859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:33:02.503372 2113859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:02.670149 2113859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:33:03.183820 2113859 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:33:03.183944 2113859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:33:03.191011 2113859 start.go:564] Will wait 60s for crictl version
	I1222 01:33:03.191125 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.195180 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:33:03.239592 2113859 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:33:03.239750 2113859 ssh_runner.go:195] Run: crio --version
	I1222 01:33:03.276484 2113859 ssh_runner.go:195] Run: crio --version
	I1222 01:33:03.325695 2113859 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:33:03.326970 2113859 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
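	For this profile the Go template renders a one-line JSON summary along these lines (the gateway 192.168.76.1 and node IP 192.168.76.2 appear in the surrounding log; the driver, subnet, and MTU values are illustrative assumptions, and the trailing comma inside ContainerIPs is produced by the template's range itself):
	
		{"Name": "no-preload-491054","Driver": "bridge","Subnet": "192.168.76.0/24","Gateway": "192.168.76.1","MTU": 0, "ContainerIPs": ["192.168.76.2/24",]}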
	I1222 01:33:03.347240 2113859 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:33:03.351832 2113859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
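	The bash one-liner drops any stale host.minikube.internal entry and re-appends the current mapping via a temp file, so /etc/hosts ends up containing:
	
		192.168.76.1	host.minikube.internal
	
	The same rewrite pattern recurs further down for control-plane.minikube.internal.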
	I1222 01:33:03.363978 2113859 kubeadm.go:884] updating cluster {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:33:03.364135 2113859 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:33:03.364193 2113859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:33:03.396066 2113859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:33:03.396096 2113859 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
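	The interleaved lines that follow are one pipeline run per image: probe the node's store with podman image inspect, mark any image whose ID differs from the expected digest as "needs transfer", remove the stale tag with crictl rmi, copy the cached tarball into /var/lib/minikube/images, and podman load it. Condensed to a single image (a sketch; minikube compares image IDs in-process rather than shelling out a conditional):
	
		sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1   # ID mismatch => needs transfer
		sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
		sudo podman load -i /var/lib/minikube/images/pause_3.10.1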
	I1222 01:33:03.396162 2113859 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:03.396375 2113859 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.396475 2113859 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.396559 2113859 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.396777 2113859 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.396934 2113859 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.397121 2113859 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.397230 2113859 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.400639 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.400925 2113859 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:03.401252 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.401482 2113859 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.401640 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.401776 2113859 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.401933 2113859 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.402175 2113859 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.651035 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1222 01:33:03.670795 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.672155 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.708386 2113859 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1222 01:33:03.708477 2113859 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1222 01:33:03.708554 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.737217 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.762779 2113859 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1222 01:33:03.762819 2113859 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.762930 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.763277 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.779160 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.853247 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:03.908757 2113859 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1222 01:33:03.908926 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:03.909017 2113859 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1222 01:33:03.909052 2113859 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:03.909090 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.908801 2113859 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:03.909157 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.949191 2113859 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1222 01:33:03.949417 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:03.949484 2113859 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:03.949532 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:03.979058 2113859 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1222 01:33:03.979103 2113859 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:03.979172 2113859 ssh_runner.go:195] Run: which crictl
	W1222 01:33:04.457206 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:06.967402 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:04.065704 2113859 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1222 01:33:04.065852 2113859 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.065919 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.066007 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:04.066112 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:04.066183 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.066412 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.066289 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:04.066318 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.232150 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.232281 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.232367 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.232456 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:33:04.232537 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.232627 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.232706 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:33:04.432423 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:33:04.432541 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:33:04.432706 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.432778 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1222 01:33:04.432869 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.432971 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:33:04.433075 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:33:04.433133 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:33:04.433231 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:04.577867 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.577952 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
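	Each transfer is gated on the same existence check: stat -c "%s %y" exits non-zero when the tarball is absent on the node, and only then is the cached copy pushed over SSH. As a shell sketch of the gate (minikube performs the copy in-process over its SSH runner):
	
		if ! stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 >/dev/null 2>&1; then
		  : # copy the cached tarball from /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 to the node
		fi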
	I1222 01:33:04.578063 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:33:04.578171 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:04.578298 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:33:04.578396 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1222 01:33:04.578441 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1222 01:33:04.578533 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:04.578614 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:04.578711 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:04.578818 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:04.578938 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:33:04.579022 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:04.685252 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.685290 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1222 01:33:04.685411 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1222 01:33:04.685432 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1222 01:33:04.685507 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:04.685627 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:04.685705 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.685724 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1222 01:33:04.685827 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1222 01:33:04.685868 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	W1222 01:33:04.721125 2113859 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1222 01:33:04.721420 2113859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:04.764232 2113859 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.764322 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1222 01:33:04.764378 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1222 01:33:04.764399 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1222 01:33:05.036404 2113859 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1222 01:33:05.036893 2113859 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:05.036972 2113859 ssh_runner.go:195] Run: which crictl
	I1222 01:33:05.413289 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1222 01:33:05.413338 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:05.413379 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:33:05.413450 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:07.584416 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (2.171016792s)
	I1222 01:33:07.584440 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1222 01:33:07.584457 2113859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:07.584508 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:33:07.584579 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.171103181s)
	I1222 01:33:07.584616 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:33:08.872326 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.287797891s)
	I1222 01:33:08.872352 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1222 01:33:08.872368 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:08.872423 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:33:08.872487 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.287861782s)
	I1222 01:33:08.872519 2113859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1222 01:33:09.454517 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:11.454710 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:10.300919 2113859 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.428379011s)
	I1222 01:33:10.300966 2113859 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1222 01:33:10.301066 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:10.301154 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.428721212s)
	I1222 01:33:10.301167 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1222 01:33:10.301181 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:10.301207 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:33:11.471273 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.17003853s)
	I1222 01:33:11.471300 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1222 01:33:11.471326 2113859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:11.471376 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:33:11.471466 2113859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.170385514s)
	I1222 01:33:11.471503 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1222 01:33:11.471522 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1222 01:33:13.518323 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (2.046913435s)
	I1222 01:33:13.518351 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1222 01:33:13.518369 2113859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:33:13.518418 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	W1222 01:33:13.454974 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	W1222 01:33:15.455565 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:15.056778 2113859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.538332463s)
	I1222 01:33:15.056802 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1222 01:33:15.056819 2113859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:15.056877 2113859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:33:15.669239 2113859 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1222 01:33:15.669279 2113859 cache_images.go:125] Successfully loaded all cached images
	I1222 01:33:15.669285 2113859 cache_images.go:94] duration metric: took 12.273177254s to LoadCachedImages
	I1222 01:33:15.669298 2113859 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:33:15.669428 2113859 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:33:15.669521 2113859 ssh_runner.go:195] Run: crio config
	I1222 01:33:15.766267 2113859 cni.go:84] Creating CNI manager for ""
	I1222 01:33:15.766317 2113859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:33:15.766335 2113859 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:33:15.766360 2113859 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491054 NodeName:no-preload-491054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:33:15.766489 2113859 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
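	The YAML just rendered is what gets staged on the node below as /var/tmp/minikube/kubeadm.yaml.new (2219 bytes) and, once the stale-config check runs, promoted in place before kubeadm consumes it:
	
		sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml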
	I1222 01:33:15.766568 2113859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:33:15.775000 2113859 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1222 01:33:15.775076 2113859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:33:15.783792 2113859 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1222 01:33:15.783912 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1222 01:33:15.784829 2113859 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet
	I1222 01:33:15.785387 2113859 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm
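	The checksum=file: URL form means the downloader fetches the published .sha256 alongside each binary and verifies it before install. By hand the equivalent is roughly (assuming the .sha256 file carries the bare digest, as the URL form implies):
	
		curl -fLO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet
		curl -fL -o kubelet.sha256 https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
		echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check -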
	I1222 01:33:15.789407 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1222 01:33:15.789452 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1222 01:33:16.870782 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:33:16.893005 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1222 01:33:16.897126 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1222 01:33:16.897163 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1222 01:33:17.155180 2113859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1222 01:33:17.172327 2113859 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1222 01:33:17.172369 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1222 01:33:17.633126 2113859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:33:17.641170 2113859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:33:17.657290 2113859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:33:17.672129 2113859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:33:17.686042 2113859 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:33:17.690564 2113859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:33:17.701363 2113859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:17.830376 2113859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:33:17.851150 2113859 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054 for IP: 192.168.76.2
	I1222 01:33:17.851171 2113859 certs.go:195] generating shared ca certs ...
	I1222 01:33:17.851188 2113859 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.851322 2113859 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:33:17.851370 2113859 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:33:17.851382 2113859 certs.go:257] generating profile certs ...
	I1222 01:33:17.851434 2113859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key
	I1222 01:33:17.851451 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt with IP's: []
	I1222 01:33:17.979141 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt ...
	I1222 01:33:17.979169 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: {Name:mkcc53105d7ba3a27d1d6f12bb0c9eed295215c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.979387 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key ...
	I1222 01:33:17.979402 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key: {Name:mke07de840627684a57c31550e8378dc9e95cd66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:17.979495 2113859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a
	I1222 01:33:17.979512 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:33:18.564226 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a ...
	I1222 01:33:18.564256 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a: {Name:mkd8e629f62eab359f7658bb5f9017eca71fb865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.564456 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a ...
	I1222 01:33:18.564471 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a: {Name:mke0f2bc960eb03fd7ec564dbd0be938a1e53d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.564555 2113859 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt.aef9806a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt
	I1222 01:33:18.564630 2113859 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key
	I1222 01:33:18.564692 2113859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key
	I1222 01:33:18.564710 2113859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt with IP's: []
	I1222 01:33:18.841279 2113859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt ...
	I1222 01:33:18.841307 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt: {Name:mkccabd6e6a4f8a64f07dedbf98fb912005ff0f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.841487 2113859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key ...
	I1222 01:33:18.841501 2113859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key: {Name:mk865b3aaa0336b2f3bf58a8de85bdb9588e55ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:18.841685 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:33:18.841733 2113859 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:33:18.841746 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:33:18.841773 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:33:18.841800 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:33:18.841864 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:33:18.841917 2113859 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:18.842483 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:33:18.860286 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:33:18.878323 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:33:18.900560 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:33:18.920760 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:33:18.940517 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:33:18.962227 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:33:18.980595 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:33:19.000729 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:33:19.021971 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:33:19.040728 2113859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:33:19.058835 2113859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:33:19.072001 2113859 ssh_runner.go:195] Run: openssl version
	I1222 01:33:19.079871 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.094194 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:33:19.108368 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.112634 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.112704 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:19.156081 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:33:19.163807 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:33:19.171280 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.178589 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:33:19.186351 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.190135 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.190200 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:33:19.231471 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:33:19.239260 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:33:19.246949 2113859 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.254373 2113859 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:33:19.262303 2113859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.266149 2113859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.266256 2113859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:33:19.308645 2113859 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:33:19.316489 2113859 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
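	Each ln -fs above names its symlink after the certificate's subject hash, which is how OpenSSL looks up CAs in /etc/ssl/certs. The hash is exactly what the preceding openssl invocation prints, e.g. for the minikube CA:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0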
	I1222 01:33:19.324189 2113859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:33:19.327847 2113859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:33:19.327926 2113859 kubeadm.go:401] StartCluster: {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:33:19.328012 2113859 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:33:19.328077 2113859 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:33:19.367450 2113859 cri.go:96] found id: ""
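cri.go's `found id: ""` means the crictl query came back empty: `crictl ps -a --quiet` prints one container ID per line, so no output means no kube-system containers exist yet on this fresh node. A small standalone version of that listing (assuming local crictl access; not minikube's actual cri package):

    // Sketch: list kube-system containers the way cri.go does above.
    // `crictl ps --quiet` emits one container ID per line; empty output
    // (as in `found id: ""`) means nothing is running yet.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
            "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        fmt.Println(ids, err)
    }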
	I1222 01:33:19.367568 2113859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:33:19.379376 2113859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:33:19.390640 2113859 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:33:19.390726 2113859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:33:19.407000 2113859 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:33:19.407021 2113859 kubeadm.go:158] found existing configuration files:
	
	I1222 01:33:19.407093 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:33:19.414853 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:33:19.414947 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:33:19.423608 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:33:19.431372 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:33:19.431496 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:33:19.439057 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:33:19.447157 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:33:19.447221 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:33:19.455751 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:33:19.463261 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:33:19.463374 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
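The pattern in the four grep/rm pairs above: for each kubeconfig under /etc/kubernetes, check whether it points at the expected control-plane endpoint, and delete it if not (a missing file, as on this first start, takes the same path), so that kubeadm writes fresh configs. A condensed sketch of that loop (local file access instead of ssh_runner; cleanupStale is illustrative):

    // Condensed sketch of the stale-config cleanup logged above: a config
    // that does not mention the control-plane endpoint (or does not exist)
    // is removed so `kubeadm init` can regenerate it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cleanupStale(endpoint string) {
        for _, c := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + c
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                os.Remove(path) // equivalent of `sudo rm -f`; error ignored
            }
        }
    }

    func main() {
        cleanupStale("https://control-plane.minikube.internal:8443")
    }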
	I1222 01:33:19.470637 2113859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:33:19.512148 2113859 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:33:19.512520 2113859 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:33:19.593167 2113859 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:33:19.593327 2113859 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:33:19.593393 2113859 kubeadm.go:319] OS: Linux
	I1222 01:33:19.593474 2113859 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:33:19.593555 2113859 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:33:19.593644 2113859 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:33:19.593721 2113859 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:33:19.593798 2113859 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:33:19.593869 2113859 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:33:19.593949 2113859 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:33:19.594020 2113859 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:33:19.594096 2113859 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:33:19.662779 2113859 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:33:19.662894 2113859 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:33:19.662988 2113859 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:33:19.682584 2113859 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1222 01:33:17.958734 2110678 pod_ready.go:104] pod "coredns-5dd5756b68-6hgk8" is not "Ready", error: <nil>
	I1222 01:33:19.954525 2110678 pod_ready.go:94] pod "coredns-5dd5756b68-6hgk8" is "Ready"
	I1222 01:33:19.954559 2110678 pod_ready.go:86] duration metric: took 35.506360771s for pod "coredns-5dd5756b68-6hgk8" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.957669 2110678 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.962500 2110678 pod_ready.go:94] pod "etcd-old-k8s-version-535197" is "Ready"
	I1222 01:33:19.962529 2110678 pod_ready.go:86] duration metric: took 4.830952ms for pod "etcd-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.965352 2110678 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.969832 2110678 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-535197" is "Ready"
	I1222 01:33:19.969862 2110678 pod_ready.go:86] duration metric: took 4.484041ms for pod "kube-apiserver-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:19.972422 2110678 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.156732 2110678 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-535197" is "Ready"
	I1222 01:33:20.156816 2110678 pod_ready.go:86] duration metric: took 184.367283ms for pod "kube-controller-manager-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.353364 2110678 pod_ready.go:83] waiting for pod "kube-proxy-lfhlv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.752210 2110678 pod_ready.go:94] pod "kube-proxy-lfhlv" is "Ready"
	I1222 01:33:20.752242 2110678 pod_ready.go:86] duration metric: took 398.849907ms for pod "kube-proxy-lfhlv" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:20.952052 2110678 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:21.352137 2110678 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-535197" is "Ready"
	I1222 01:33:21.352170 2110678 pod_ready.go:86] duration metric: took 400.085029ms for pod "kube-scheduler-old-k8s-version-535197" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:33:21.352184 2110678 pod_ready.go:40] duration metric: took 36.910285031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:33:21.443683 2110678 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1222 01:33:21.446881 2110678 out.go:203] 
	W1222 01:33:21.449856 2110678 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1222 01:33:21.452799 2110678 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1222 01:33:21.458527 2110678 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-535197" cluster and "default" namespace by default
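The 2110678 lines above are the readiness gate for the old-k8s-version-535197 cluster: for each control-plane label, minikube polls until the pod reports Ready (or is gone) and records a duration metric. A toy version of that poll loop (checkReady stands in for the real client-go query, and the "or be gone" case is noted but not modeled):

    // Minimal poll-until-ready sketch of the pod_ready loop above.
    // checkReady is a hypothetical stand-in for the API query; the real
    // code also treats a deleted pod as success ("or be gone").
    package main

    import (
        "fmt"
        "time"
    )

    func waitPodReady(name string, checkReady func(string) (bool, error), timeout time.Duration) error {
        start := time.Now()
        for deadline := start.Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
            ready, err := checkReady(name)
            if err != nil {
                continue // transient API errors are retried, as in the W... lines above
            }
            if ready {
                fmt.Printf("duration metric: took %s for pod %q\n", time.Since(start), name)
                return nil
            }
        }
        return fmt.Errorf("pod %q not Ready after %s", name, timeout)
    }

    func main() {
        // Toy check that becomes ready after ~4s, just to exercise the loop.
        becomeReadyAt := time.Now().Add(4 * time.Second)
        _ = waitPodReady("coredns-5dd5756b68-6hgk8", func(string) (bool, error) {
            return time.Now().After(becomeReadyAt), nil
        }, 30*time.Second)
    }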
	I1222 01:33:19.688410 2113859 out.go:252]   - Generating certificates and keys ...
	I1222 01:33:19.688524 2113859 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:33:19.688595 2113859 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:33:20.151895 2113859 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:33:20.401703 2113859 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:33:21.005040 2113859 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:33:21.140685 2113859 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:33:21.824673 2113859 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:33:21.825252 2113859 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:33:22.020671 2113859 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:33:22.020933 2113859 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:33:22.474306 2113859 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:33:22.884092 2113859 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:33:23.034492 2113859 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:33:23.034774 2113859 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:33:23.075352 2113859 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:33:23.135102 2113859 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:33:23.251153 2113859 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:33:23.986932 2113859 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:33:24.273870 2113859 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:33:24.274658 2113859 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:33:24.277328 2113859 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:33:24.281021 2113859 out.go:252]   - Booting up control plane ...
	I1222 01:33:24.281119 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:33:24.281198 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:33:24.281264 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:33:24.298176 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:33:24.298295 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:33:24.306342 2113859 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:33:24.306969 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:33:24.307038 2113859 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:33:24.444902 2113859 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:33:24.445025 2113859 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
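The kubelet-check phase that the no-preload run is now blocked on simply polls http://127.0.0.1:10248/healthz until it returns 200, giving up after 4m0s. A self-contained probe along those lines (the one-second interval is an assumption; kubeadm's real backoff differs):

    // Standalone sketch of the kubelet healthz wait logged above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitKubeletHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // kubelet is up; static control-plane pods can follow
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("kubelet at %s not healthy after %s", url, timeout)
    }

    func main() {
        err := waitKubeletHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute)
        fmt.Println(err)
    }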
	
	
	==> CRI-O <==
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.840463265Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b43f29d-8f77-4db5-b7e3-29e3aad4e404 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.841672295Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c660f8e-0a1b-4aab-bd7a-5d4f7825c03a name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.843451278Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper" id=ecd706ca-ec69-4914-ad93-aa4200b33d11 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.84356937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.861743557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.86385688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.927997333Z" level=info msg="Created container 213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper" id=ecd706ca-ec69-4914-ad93-aa4200b33d11 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.930953346Z" level=info msg="Starting container: 213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e" id=1060593a-a47b-4488-89f2-37d632af1a52 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:33:17 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:17.934701114Z" level=info msg="Started container" PID=1642 containerID=213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper id=1060593a-a47b-4488-89f2-37d632af1a52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=60dc2140434a0c650ba2f4206eb54015ea94958398d65f57054faa6ffe4e60d0
	Dec 22 01:33:17 old-k8s-version-535197 conmon[1640]: conmon 213d935ce1b4c5a81f8b <ninfo>: container 1642 exited with status 1
	Dec 22 01:33:18 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:18.085237223Z" level=info msg="Removing container: dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7" id=0cb652c8-0ebc-41ec-a276-451a733f0b42 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:33:18 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:18.099900616Z" level=info msg="Error loading conmon cgroup of container dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7: cgroup deleted" id=0cb652c8-0ebc-41ec-a276-451a733f0b42 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:33:18 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:18.104224752Z" level=info msg="Removed container dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz/dashboard-metrics-scraper" id=0cb652c8-0ebc-41ec-a276-451a733f0b42 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.806305077Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.84106561Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.841281365Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.841397571Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.849431256Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.849598989Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.849688955Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.865527869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.865704775Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.865785076Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.874901535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:33:21 old-k8s-version-535197 crio[654]: time="2025-12-22T01:33:21.875059331Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	213d935ce1b4c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   60dc2140434a0       dashboard-metrics-scraper-5f989dc9cf-sc9fz       kubernetes-dashboard
	2cdb0c07e9744       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   d94ba303a338e       storage-provisioner                              kube-system
	53cf842660721       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago       Running             kubernetes-dashboard        0                   ffed5d7f89a72       kubernetes-dashboard-8694d4445c-bhzmc            kubernetes-dashboard
	b58fe534665d4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   cfe1c6442f026       coredns-5dd5756b68-6hgk8                         kube-system
	b2b297128099a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   853cc164dc835       kube-proxy-lfhlv                                 kube-system
	5e1d3bdf14e11       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   6ffb08bea5224       busybox                                          default
	336bb7045b1cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   d94ba303a338e       storage-provisioner                              kube-system
	e20ee4063c483       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           56 seconds ago       Running             kindnet-cni                 1                   95fcb69c1861a       kindnet-bf7tv                                    kube-system
	71f0891781692       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   8cf8be051cd0e       kube-controller-manager-old-k8s-version-535197   kube-system
	58d08a0c57716       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   bd10dfc5e5e50       kube-apiserver-old-k8s-version-535197            kube-system
	ee1dc9c8ebfd9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   ee49aec7d8932       etcd-old-k8s-version-535197                      kube-system
	6458377dd96fa       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8cb2d5768fb29       kube-scheduler-old-k8s-version-535197            kube-system
	
	
	==> coredns [b58fe534665d48104bc15d15f4c6f0ece29c6665e8419ee2a5bdd0272b03c474] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41875 - 18211 "HINFO IN 1988241562069470050.4449834520038572590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00570071s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-535197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-535197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=old-k8s-version-535197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_31_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:31:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-535197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:33:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:33:10 +0000   Mon, 22 Dec 2025 01:31:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-535197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                7cd106b7-7f02-4dbd-b127-b7707e12d39f
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-6hgk8                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-old-k8s-version-535197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-bf7tv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-535197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-535197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-lfhlv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-535197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-sc9fz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bhzmc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s               kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s               kubelet          Node old-k8s-version-535197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s               kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node old-k8s-version-535197 event: Registered Node old-k8s-version-535197 in Controller
	  Normal  NodeReady                99s                kubelet          Node old-k8s-version-535197 status is now: NodeReady
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node old-k8s-version-535197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node old-k8s-version-535197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-535197 event: Registered Node old-k8s-version-535197 in Controller
	
	
	==> dmesg <==
	[Dec22 01:00] overlayfs: idmapped layers are currently not supported
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee1dc9c8ebfd92a5a6c114d952ca8de817933c150946b2636677ec572aa63702] <==
	{"level":"info","ts":"2025-12-22T01:32:35.842667Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-22T01:32:35.842675Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-22T01:32:35.842986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-22T01:32:35.843042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-22T01:32:35.843118Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:32:35.843142Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-22T01:32:35.881868Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-22T01:32:35.882117Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-22T01:32:35.882169Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-22T01:32:35.882244Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-22T01:32:35.882275Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-22T01:32:37.421212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-22T01:32:37.421259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-22T01:32:37.421287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-22T01:32:37.4213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.421307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.421317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.421354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-22T01:32:37.429525Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-535197 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-22T01:32:37.429657Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-22T01:32:37.430649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-22T01:32:37.431312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-22T01:32:37.432198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-22T01:32:37.439444Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-22T01:32:37.439529Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:33:38 up 1 day,  8:16,  0 user,  load average: 3.82, 3.20, 2.49
	Linux old-k8s-version-535197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e20ee4063c483434ea2065bdae477712ff0bd65c9cc1905a7305884c3a1d5a61] <==
	I1222 01:32:41.539289       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:32:41.539472       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:32:41.539591       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:32:41.539602       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:32:41.539614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:32:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:32:41.803203       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:32:41.824547       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:32:41.824675       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:32:41.825659       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1222 01:33:11.804062       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1222 01:33:11.825691       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1222 01:33:11.825904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1222 01:33:11.826077       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1222 01:33:13.025684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:33:13.025780       1 metrics.go:72] Registering metrics
	I1222 01:33:13.025882       1 controller.go:711] "Syncing nftables rules"
	I1222 01:33:21.805134       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:33:21.805173       1 main.go:301] handling current node
	I1222 01:33:31.803431       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:33:31.803460       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58d08a0c5771635b028dd68e96fe171c409dd75d3491d6f68517244657c139ff] <==
	I1222 01:32:40.283417       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1222 01:32:40.283648       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1222 01:32:40.285285       1 aggregator.go:166] initial CRD sync complete...
	I1222 01:32:40.285378       1 autoregister_controller.go:141] Starting autoregister controller
	I1222 01:32:40.285408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:32:40.285438       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:32:40.286520       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1222 01:32:40.304228       1 shared_informer.go:318] Caches are synced for configmaps
	I1222 01:32:40.304448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1222 01:32:40.344674       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1222 01:32:40.344706       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1222 01:32:40.348891       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:32:40.353421       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1222 01:32:40.418574       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1222 01:32:40.719639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:32:43.884513       1 controller.go:624] quota admission added evaluator for: namespaces
	I1222 01:32:43.957152       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1222 01:32:44.001488       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:32:44.032320       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:32:44.127356       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1222 01:32:44.331952       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.97.199"}
	I1222 01:32:44.354253       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.226.17"}
	I1222 01:32:53.306010       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1222 01:32:53.556909       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:32:53.734405       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [71f0891781692fab9a2c32b04d877bfefe34482d11319ba2640e98531c0dc00a] <==
	I1222 01:32:53.394789       1 shared_informer.go:318] Caches are synced for resource quota
	I1222 01:32:53.685648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="457.072187ms"
	I1222 01:32:53.690656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.799µs"
	I1222 01:32:53.703402       1 shared_informer.go:318] Caches are synced for garbage collector
	I1222 01:32:53.703521       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1222 01:32:53.705221       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-bhzmc"
	I1222 01:32:53.710505       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-sc9fz"
	I1222 01:32:53.754153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="438.355903ms"
	I1222 01:32:53.755779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="427.703623ms"
	I1222 01:32:53.786267       1 shared_informer.go:318] Caches are synced for garbage collector
	I1222 01:32:53.787454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="33.151071ms"
	I1222 01:32:53.787624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.129µs"
	I1222 01:32:53.801597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.428789ms"
	I1222 01:32:53.801765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.982µs"
	I1222 01:32:53.802067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.012µs"
	I1222 01:32:53.886554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.692µs"
	I1222 01:33:01.028487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.851µs"
	I1222 01:33:02.054672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.381µs"
	I1222 01:33:03.048760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.895µs"
	I1222 01:33:08.082706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.882936ms"
	I1222 01:33:08.082901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.313µs"
	I1222 01:33:19.105782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.293µs"
	I1222 01:33:19.769052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.593583ms"
	I1222 01:33:19.770194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.673µs"
	I1222 01:33:25.865791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.99µs"
	
	
	==> kube-proxy [b2b297128099a31b08568ff2228dd1cd7119453d1eba86c528fe063e19bda8c0] <==
	I1222 01:32:42.170704       1 server_others.go:69] "Using iptables proxy"
	I1222 01:32:42.258998       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1222 01:32:42.442907       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:32:42.450591       1 server_others.go:152] "Using iptables Proxier"
	I1222 01:32:42.450691       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1222 01:32:42.450723       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1222 01:32:42.450771       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1222 01:32:42.451017       1 server.go:846] "Version info" version="v1.28.0"
	I1222 01:32:42.451275       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:32:42.451992       1 config.go:188] "Starting service config controller"
	I1222 01:32:42.452071       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1222 01:32:42.452117       1 config.go:97] "Starting endpoint slice config controller"
	I1222 01:32:42.452164       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1222 01:32:42.452653       1 config.go:315] "Starting node config controller"
	I1222 01:32:42.452713       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1222 01:32:42.553133       1 shared_informer.go:318] Caches are synced for node config
	I1222 01:32:42.553175       1 shared_informer.go:318] Caches are synced for service config
	I1222 01:32:42.553202       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6458377dd96fa4bd80200d2466d86226b03550784e2c540f23e3138350d286c4] <==
	I1222 01:32:38.049983       1 serving.go:348] Generated self-signed cert in-memory
	W1222 01:32:40.104843       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1222 01:32:40.104967       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:32:40.105003       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1222 01:32:40.109432       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1222 01:32:40.210405       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1222 01:32:40.210449       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:32:40.212263       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1222 01:32:40.212376       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:32:40.212397       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1222 01:32:40.212416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1222 01:32:40.274001       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1222 01:32:40.274120       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:32:40.298187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1222 01:32:40.298225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1222 01:32:40.298415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1222 01:32:40.298429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I1222 01:32:41.212711       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
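	
	The requestheader_controller warning above carries its own suggested fix. Filled in with illustrative names (the binding name and service account below are placeholders, not values from this run), it would read:
	
	    kubectl create rolebinding extension-apiserver-authn-reader \
	      -n kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --serviceaccount=kube-system:my-scheduler
	
	In this log, though, the denials look like transient bootstrap noise rather than a missing binding: the RBAC errors report the default clusterroles themselves as "not found", and the client-ca informer cache syncs one second later at 01:32:41.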
	
	
	==> kubelet <==
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962137     783 projected.go:198] Error preparing data for projected volume kube-api-access-8k56c for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz: failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962284     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/16d0eb14-b3f7-4ba4-9855-8ffd99e83905-kube-api-access-8k56c podName:16d0eb14-b3f7-4ba4-9855-8ffd99e83905 nodeName:}" failed. No retries permitted until 2025-12-22 01:32:55.46225276 +0000 UTC m=+20.819772161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8k56c" (UniqueName: "kubernetes.io/projected/16d0eb14-b3f7-4ba4-9855-8ffd99e83905-kube-api-access-8k56c") pod "dashboard-metrics-scraper-5f989dc9cf-sc9fz" (UID: "16d0eb14-b3f7-4ba4-9855-8ffd99e83905") : failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.961684     783 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962460     783 projected.go:198] Error preparing data for projected volume kube-api-access-z7s4v for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bhzmc: failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:54 old-k8s-version-535197 kubelet[783]: E1222 01:32:54.962547     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/143daf7b-4993-46cb-bc29-44e6d8f97352-kube-api-access-z7s4v podName:143daf7b-4993-46cb-bc29-44e6d8f97352 nodeName:}" failed. No retries permitted until 2025-12-22 01:32:55.462534606 +0000 UTC m=+20.820054007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z7s4v" (UniqueName: "kubernetes.io/projected/143daf7b-4993-46cb-bc29-44e6d8f97352-kube-api-access-z7s4v") pod "kubernetes-dashboard-8694d4445c-bhzmc" (UID: "143daf7b-4993-46cb-bc29-44e6d8f97352") : failed to sync configmap cache: timed out waiting for the condition
	Dec 22 01:32:55 old-k8s-version-535197 kubelet[783]: W1222 01:32:55.918775     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d786ec4bcd2c9698c63afb4f4664b7aac71cefc31750d193f9f51b987e2a0ab1/crio-ffed5d7f89a7274cff785c73a9711235d17ba848f48ce8c354c308786538823e WatchSource:0}: Error finding container ffed5d7f89a7274cff785c73a9711235d17ba848f48ce8c354c308786538823e: Status 404 returned error can't find the container with id ffed5d7f89a7274cff785c73a9711235d17ba848f48ce8c354c308786538823e
	Dec 22 01:33:01 old-k8s-version-535197 kubelet[783]: I1222 01:33:01.014583     783 scope.go:117] "RemoveContainer" containerID="e9233c95ff47cd6b1d5213ba2a4d982955a23056a515a23175263074ba9fc037"
	Dec 22 01:33:02 old-k8s-version-535197 kubelet[783]: I1222 01:33:02.021559     783 scope.go:117] "RemoveContainer" containerID="e9233c95ff47cd6b1d5213ba2a4d982955a23056a515a23175263074ba9fc037"
	Dec 22 01:33:02 old-k8s-version-535197 kubelet[783]: I1222 01:33:02.030322     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:02 old-k8s-version-535197 kubelet[783]: E1222 01:33:02.033545     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:03 old-k8s-version-535197 kubelet[783]: I1222 01:33:03.028811     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:03 old-k8s-version-535197 kubelet[783]: E1222 01:33:03.029080     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:05 old-k8s-version-535197 kubelet[783]: I1222 01:33:05.851398     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:05 old-k8s-version-535197 kubelet[783]: E1222 01:33:05.851694     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:08 old-k8s-version-535197 kubelet[783]: I1222 01:33:08.070354     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bhzmc" podStartSLOduration=3.287552187 podCreationTimestamp="2025-12-22 01:32:53 +0000 UTC" firstStartedPulling="2025-12-22 01:32:55.942058894 +0000 UTC m=+21.299578295" lastFinishedPulling="2025-12-22 01:33:07.724798315 +0000 UTC m=+33.082317715" observedRunningTime="2025-12-22 01:33:08.069702866 +0000 UTC m=+33.427222266" watchObservedRunningTime="2025-12-22 01:33:08.070291607 +0000 UTC m=+33.427811008"
	Dec 22 01:33:12 old-k8s-version-535197 kubelet[783]: I1222 01:33:12.066299     783 scope.go:117] "RemoveContainer" containerID="336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04"
	Dec 22 01:33:17 old-k8s-version-535197 kubelet[783]: I1222 01:33:17.839394     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:18 old-k8s-version-535197 kubelet[783]: I1222 01:33:18.083763     783 scope.go:117] "RemoveContainer" containerID="dce0793a4909760c31123616f5bf08c355c097aa764bb0819aca4d9984c974c7"
	Dec 22 01:33:19 old-k8s-version-535197 kubelet[783]: I1222 01:33:19.088057     783 scope.go:117] "RemoveContainer" containerID="213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	Dec 22 01:33:19 old-k8s-version-535197 kubelet[783]: E1222 01:33:19.088735     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:25 old-k8s-version-535197 kubelet[783]: I1222 01:33:25.851350     783 scope.go:117] "RemoveContainer" containerID="213d935ce1b4c5a81f8b5c15a2cada2c3c9cf4fd35d26e4b859d00d9b6470b0e"
	Dec 22 01:33:25 old-k8s-version-535197 kubelet[783]: E1222 01:33:25.851671     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sc9fz_kubernetes-dashboard(16d0eb14-b3f7-4ba4-9855-8ffd99e83905)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sc9fz" podUID="16d0eb14-b3f7-4ba4-9855-8ffd99e83905"
	Dec 22 01:33:33 old-k8s-version-535197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:33:33 old-k8s-version-535197 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:33:33 old-k8s-version-535197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [53cf84266072177d217ef2457215faa2f0d9e68a7d7cb456e860ff12d6878b3c] <==
	2025/12/22 01:33:07 Using namespace: kubernetes-dashboard
	2025/12/22 01:33:07 Using in-cluster config to connect to apiserver
	2025/12/22 01:33:07 Using secret token for csrf signing
	2025/12/22 01:33:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/22 01:33:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/22 01:33:07 Successful initial request to the apiserver, version: v1.28.0
	2025/12/22 01:33:07 Generating JWE encryption key
	2025/12/22 01:33:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/22 01:33:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/22 01:33:08 Initializing JWE encryption key from synchronized object
	2025/12/22 01:33:08 Creating in-cluster Sidecar client
	2025/12/22 01:33:08 Serving insecurely on HTTP port: 9090
	2025/12/22 01:33:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:33:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:33:07 Starting overwatch
	
	
	==> storage-provisioner [2cdb0c07e974428487346498e876bb16c65cfa7e95e5303ed7f92aa135b676a5] <==
	I1222 01:33:12.187029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:33:12.211652       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:33:12.211781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1222 01:33:29.613654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:33:29.614106       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cce17c97-ab91-4b3b-aab2-fd0b22a9040e", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-535197_eb9cbd64-7af6-4c74-b0b8-24628253093d became leader
	I1222 01:33:29.614184       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-535197_eb9cbd64-7af6-4c74-b0b8-24628253093d!
	I1222 01:33:29.714611       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-535197_eb9cbd64-7af6-4c74-b0b8-24628253093d!
	
	
	==> storage-provisioner [336bb7045b1cd554f929f862cf83e52a582bfa5bccb3fd3858ad9bbd6ca94f04] <==
	I1222 01:32:41.615135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1222 01:33:11.642172       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
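
The storage-provisioner log above shows the standard client-go leader-election pattern: every replica contends for a named lock in kube-system, and only the winner starts the provisioner controller. A minimal sketch of that pattern, assuming an in-cluster config and a Lease-based lock (the provisioner in this run uses the older Endpoints-based lock, as the LeaderElection event shows; the durations and identity below are illustrative):

	package main
	
	import (
		"context"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // runs inside the cluster, like the provisioner pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		id, _ := os.Hostname() // each contender needs a unique identity, e.g. the pod name
	
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lock name taken from the log above
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long a lease holds without renewal
			RenewDeadline: 10 * time.Second, // leader must renew within this window
			RetryPeriod:   2 * time.Second,  // how often contenders retry acquisition
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// "successfully acquired lease": start leader-only work here
				},
				OnStoppedLeading: func() {
					// lease lost: stop leader-only work
				},
			},
		})
	}
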
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-535197 -n old-k8s-version-535197
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-535197 -n old-k8s-version-535197: exit status 2 (343.333506ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
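
The status check above reads a single field of minikube's status struct through a Go template; several components can be queried at once the same way (the field names are the ones this report already uses, such as {{.Host}} and {{.APIServer}}, and the exit status stays nonzero whenever a component is not Running, which is why the harness notes "may be ok"):

	out/minikube-linux-arm64 status -p old-k8s-version-535197 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
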
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-535197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.99s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.52s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.692278ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:34:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
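
The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which runs the exact command quoted in the stderr inside the node. It can be reproduced by hand against the same profile; the "open /run/runc" error suggests runc had not created any container state under its default root at that point:

	out/minikube-linux-arm64 ssh -p embed-certs-235567 -- sudo runc list -f json
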
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-235567 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-235567 describe deploy/metrics-server -n kube-system: exit status 1 (80.470798ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-235567 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-235567
helpers_test.go:244: (dbg) docker inspect embed-certs-235567:

-- stdout --
	[
	    {
	        "Id": "4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6",
	        "Created": "2025-12-22T01:33:47.830445272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2118642,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:33:47.892211848Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/hosts",
	        "LogPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6-json.log",
	        "Name": "/embed-certs-235567",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-235567:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-235567",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6",
	                "LowerDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-235567",
	                "Source": "/var/lib/docker/volumes/embed-certs-235567/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-235567",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-235567",
	                "name.minikube.sigs.k8s.io": "embed-certs-235567",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba6336508255fa4e8a1c336aca52e96efb096a2c2cd91f0f40a35f1770db3030",
	            "SandboxKey": "/var/run/docker/netns/ba6336508255",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38756"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38757"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38760"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38758"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38759"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-235567": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:dc:b8:e6:7a:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f3306e4c44399ff80f026f59f3fa2f6e6e98178df3c00a9bdf130d8ed5d4e77",
	                    "EndpointID": "31b40020c29dc9f1569a548bd1970ea65c17e6408e27579bed1ee048737adff9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-235567",
	                        "4081720ccc5a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
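
In the inspect output above, HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, and NetworkSettings.Ports records the ephemeral host ports Docker then assigned. A single mapping can be read back more directly (the port number matches the inspect output):

	docker port embed-certs-235567 8443
	127.0.0.1:38759
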
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-235567 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-235567 logs -n 25: (1.320552481s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p NoKubernetes-346588 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ ssh     │ -p NoKubernetes-346588 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │                     │
	│ delete  │ -p NoKubernetes-346588                                                                                                                                                                                                                        │ NoKubernetes-346588       │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:29 UTC │
	│ start   │ -p force-systemd-flag-314223 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:29 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ force-systemd-flag-314223 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p force-systemd-flag-314223                                                                                                                                                                                                                  │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ start   │ -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ cert-options-433911 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ -p cert-options-433911 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ stop    │ -p old-k8s-version-535197 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054         │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:33:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:33:42.566393 2118210 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:33:42.566580 2118210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:33:42.566610 2118210 out.go:374] Setting ErrFile to fd 2...
	I1222 01:33:42.566631 2118210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:33:42.566999 2118210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:33:42.568055 2118210 out.go:368] Setting JSON to false
	I1222 01:33:42.569020 2118210 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116172,"bootTime":1766251051,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:33:42.569122 2118210 start.go:143] virtualization:  
	I1222 01:33:42.572126 2118210 out.go:179] * [embed-certs-235567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:33:42.573532 2118210 notify.go:221] Checking for updates...
	I1222 01:33:42.574337 2118210 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:33:42.576184 2118210 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:33:42.578407 2118210 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:33:42.579656 2118210 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:33:42.580898 2118210 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:33:42.582080 2118210 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:33:42.583911 2118210 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:33:42.584000 2118210 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:33:42.606096 2118210 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:33:42.606230 2118210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:33:42.674728 2118210 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:33:42.665050913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:33:42.674835 2118210 docker.go:319] overlay module found
	I1222 01:33:42.677418 2118210 out.go:179] * Using the docker driver based on user configuration
	I1222 01:33:42.678843 2118210 start.go:309] selected driver: docker
	I1222 01:33:42.678863 2118210 start.go:928] validating driver "docker" against <nil>
	I1222 01:33:42.678877 2118210 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:33:42.679583 2118210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:33:42.733953 2118210 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:33:42.725240584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:33:42.734126 2118210 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:33:42.734347 2118210 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:33:42.736421 2118210 out.go:179] * Using Docker driver with root privileges
	I1222 01:33:42.738106 2118210 cni.go:84] Creating CNI manager for ""
	I1222 01:33:42.738194 2118210 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:33:42.738208 2118210 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:33:42.738335 2118210 start.go:353] cluster config:
	{Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:33:42.740546 2118210 out.go:179] * Starting "embed-certs-235567" primary control-plane node in "embed-certs-235567" cluster
	I1222 01:33:42.742811 2118210 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:33:42.745051 2118210 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:33:42.747377 2118210 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:33:42.747424 2118210 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:33:42.747437 2118210 cache.go:65] Caching tarball of preloaded images
	I1222 01:33:42.747476 2118210 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:33:42.747517 2118210 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:33:42.747538 2118210 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:33:42.747638 2118210 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json ...
	I1222 01:33:42.747654 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json: {Name:mk9a2f2461b5fc730ee2c795c0f8e3786654ca9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:42.766550 2118210 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:33:42.766574 2118210 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:33:42.766593 2118210 cache.go:243] Successfully downloaded all kic artifacts
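	
	The cache phase above can be verified by hand: the preload is a plain tarball under the minikube home, and the base-image check is an ordinary local image lookup (path and digest are taken verbatim from the log lines above):
	
	    ls -lh /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	    docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	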
	I1222 01:33:42.766626 2118210 start.go:360] acquireMachinesLock for embed-certs-235567: {Name:mk48f2965e632f2a10179742c54c3729e95c3cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:33:42.766744 2118210 start.go:364] duration metric: took 97.72µs to acquireMachinesLock for "embed-certs-235567"
	I1222 01:33:42.766774 2118210 start.go:93] Provisioning new machine with config: &{Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:33:42.766848 2118210 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:33:42.769854 2118210 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:33:42.770064 2118210 start.go:159] libmachine.API.Create for "embed-certs-235567" (driver="docker")
	I1222 01:33:42.770110 2118210 client.go:173] LocalClient.Create starting
	I1222 01:33:42.770207 2118210 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:33:42.770255 2118210 main.go:144] libmachine: Decoding PEM data...
	I1222 01:33:42.770274 2118210 main.go:144] libmachine: Parsing certificate...
	I1222 01:33:42.770333 2118210 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:33:42.770354 2118210 main.go:144] libmachine: Decoding PEM data...
	I1222 01:33:42.770368 2118210 main.go:144] libmachine: Parsing certificate...
	I1222 01:33:42.770734 2118210 cli_runner.go:164] Run: docker network inspect embed-certs-235567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:33:42.787531 2118210 cli_runner.go:211] docker network inspect embed-certs-235567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:33:42.787631 2118210 network_create.go:284] running [docker network inspect embed-certs-235567] to gather additional debugging logs...
	I1222 01:33:42.787652 2118210 cli_runner.go:164] Run: docker network inspect embed-certs-235567
	W1222 01:33:42.803692 2118210 cli_runner.go:211] docker network inspect embed-certs-235567 returned with exit code 1
	I1222 01:33:42.803726 2118210 network_create.go:287] error running [docker network inspect embed-certs-235567]: docker network inspect embed-certs-235567: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-235567 not found
	I1222 01:33:42.803738 2118210 network_create.go:289] output of [docker network inspect embed-certs-235567]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-235567 not found
	
	** /stderr **
	I1222 01:33:42.803843 2118210 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:33:42.821074 2118210 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:33:42.821490 2118210 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:33:42.821851 2118210 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:33:42.822185 2118210 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:33:42.822692 2118210 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58050}
	I1222 01:33:42.822715 2118210 network_create.go:124] attempt to create docker network embed-certs-235567 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:33:42.822778 2118210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-235567 embed-certs-235567
	I1222 01:33:42.888474 2118210 network_create.go:108] docker network embed-certs-235567 192.168.85.0/24 created
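The network.go lines above show how the driver picks an address range: it probes candidate private /24 subnets starting at 192.168.49.0 and, in this run, steps the third octet by 9 (49, 58, 67, 76, 85) until it finds one with no existing bridge interface. A minimal Go sketch of that probing loop follows; the isTaken helper and the step size are inferred from this log, not taken from minikube's source.

    package main

    import (
    	"fmt"
    	"net"
    )

    // isTaken reports whether any local interface already sits on the
    // candidate subnet. Stand-in for minikube's richer reservation logic.
    func isTaken(subnet *net.IPNet) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false
    	}
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Candidates mirror the log: 192.168.49.0/24, .58, .67, .76, .85, ...
    	for third := 49; third <= 255; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		_, subnet, _ := net.ParseCIDR(cidr)
    		if isTaken(subnet) {
    			fmt.Println("skipping subnet", cidr, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", cidr)
    		return
    	}
    }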
	I1222 01:33:42.888514 2118210 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-235567" container
	I1222 01:33:42.888603 2118210 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:33:42.904956 2118210 cli_runner.go:164] Run: docker volume create embed-certs-235567 --label name.minikube.sigs.k8s.io=embed-certs-235567 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:33:42.922918 2118210 oci.go:103] Successfully created a docker volume embed-certs-235567
	I1222 01:33:42.923013 2118210 cli_runner.go:164] Run: docker run --rm --name embed-certs-235567-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-235567 --entrypoint /usr/bin/test -v embed-certs-235567:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:33:43.506448 2118210 oci.go:107] Successfully prepared a docker volume embed-certs-235567
	I1222 01:33:43.506524 2118210 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:33:43.506538 2118210 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:33:43.506606 2118210 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-235567:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:33:47.752589 2118210 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-235567:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.245943852s)
	I1222 01:33:47.752638 2118210 kic.go:203] duration metric: took 4.246095815s to extract preloaded images to volume ...
	W1222 01:33:47.752770 2118210 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:33:47.752888 2118210 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:33:47.813872 2118210 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-235567 --name embed-certs-235567 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-235567 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-235567 --network embed-certs-235567 --ip 192.168.85.2 --volume embed-certs-235567:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
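Note the --publish=127.0.0.1::8443 style flags on the docker run above: the empty host-port field tells Docker to bind each container port to an ephemeral localhost port, which is why later lines inspect {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to discover where SSH landed (port 38756 in this run). A sketch of that lookup, shelling out to docker the same way cli_runner does:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort returns the ephemeral localhost port Docker assigned to a
    // container port, e.g. hostPort("embed-certs-235567", "22/tcp").
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("embed-certs-235567", "22/tcp")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh is published on 127.0.0.1:" + p)
    }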
	I1222 01:33:48.130851 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Running}}
	I1222 01:33:48.153460 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:33:48.178882 2118210 cli_runner.go:164] Run: docker exec embed-certs-235567 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:33:48.233591 2118210 oci.go:144] the created container "embed-certs-235567" has a running status.
	I1222 01:33:48.233618 2118210 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa...
	I1222 01:33:48.490198 2118210 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:33:48.510069 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:33:48.535688 2118210 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:33:48.535707 2118210 kic_runner.go:114] Args: [docker exec --privileged embed-certs-235567 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:33:48.595007 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:33:48.633637 2118210 machine.go:94] provisionDockerMachine start ...
	I1222 01:33:48.633737 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:48.674371 2118210 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:48.674706 2118210 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38756 <nil> <nil>}
	I1222 01:33:48.674715 2118210 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:33:48.675370 2118210 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42228->127.0.0.1:38756: read: connection reset by peer
	I1222 01:33:51.808999 2118210 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-235567
	
	I1222 01:33:51.809065 2118210 ubuntu.go:182] provisioning hostname "embed-certs-235567"
	I1222 01:33:51.809137 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:51.826416 2118210 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:51.826742 2118210 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38756 <nil> <nil>}
	I1222 01:33:51.826761 2118210 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-235567 && echo "embed-certs-235567" | sudo tee /etc/hostname
	I1222 01:33:51.966681 2118210 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-235567
	
	I1222 01:33:51.966776 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:51.984175 2118210 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:51.984490 2118210 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38756 <nil> <nil>}
	I1222 01:33:51.984514 2118210 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235567/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:33:52.118489 2118210 main.go:144] libmachine: SSH cmd err, output: <nil>: 
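The shell fragment above makes the hostname entry idempotent: if the name is missing from /etc/hosts, an existing 127.0.1.1 line is rewritten, otherwise one is appended. The same edit expressed in Go, operating on a hosts file directly (a sketch only; minikube runs the shell version over SSH, and the presence check here approximates the grep):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostname rewrites an existing 127.0.1.1 line or appends one,
    // mirroring the sed/tee logic in the log.
    func ensureHostname(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), name) {
    		return nil // entry already present
    	}
    	entry := "127.0.1.1 " + name
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.Match(data) {
    		data = re.ReplaceAll(data, []byte(entry))
    	} else {
    		data = append(data, []byte(entry+"\n")...)
    	}
    	return os.WriteFile(path, data, 0644)
    }

    func main() {
    	if err := ensureHostname("/etc/hosts", "embed-certs-235567"); err != nil {
    		fmt.Println(err)
    	}
    }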
	I1222 01:33:52.118516 2118210 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:33:52.118598 2118210 ubuntu.go:190] setting up certificates
	I1222 01:33:52.118633 2118210 provision.go:84] configureAuth start
	I1222 01:33:52.118708 2118210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:33:52.147030 2118210 provision.go:143] copyHostCerts
	I1222 01:33:52.147099 2118210 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:33:52.147112 2118210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:33:52.147188 2118210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:33:52.147286 2118210 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:33:52.147299 2118210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:33:52.147327 2118210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:33:52.147380 2118210 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:33:52.147389 2118210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:33:52.147414 2118210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:33:52.147463 2118210 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235567 san=[127.0.0.1 192.168.85.2 embed-certs-235567 localhost minikube]
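provision.go is minting a server certificate whose SANs cover every name the machine can be reached by: the loopback and static container IPs plus the profile name, localhost, and minikube. A compressed crypto/x509 sketch of a certificate with those SANs (self-signed here for brevity; minikube signs with its ca.pem, and the 26280h lifetime comes from CertExpiration in the config above):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

    	// SANs mirror the log line above: every IP and DNS name the server answers on.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-235567"}},
    		DNSNames:     []string{"embed-certs-235567", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }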
	I1222 01:33:52.483836 2118210 provision.go:177] copyRemoteCerts
	I1222 01:33:52.483935 2118210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:33:52.483983 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:52.502759 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:33:52.601565 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:33:52.620475 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:33:52.638689 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1222 01:33:52.657494 2118210 provision.go:87] duration metric: took 538.836856ms to configureAuth
	I1222 01:33:52.657524 2118210 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:33:52.657700 2118210 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:33:52.657821 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:52.676492 2118210 main.go:144] libmachine: Using SSH client type: native
	I1222 01:33:52.676815 2118210 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38756 <nil> <nil>}
	I1222 01:33:52.676829 2118210 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:33:53.068315 2118210 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:33:53.068338 2118210 machine.go:97] duration metric: took 4.434682986s to provisionDockerMachine
	I1222 01:33:53.068349 2118210 client.go:176] duration metric: took 10.298228089s to LocalClient.Create
	I1222 01:33:53.068363 2118210 start.go:167] duration metric: took 10.298299734s to libmachine.API.Create "embed-certs-235567"
	I1222 01:33:53.068370 2118210 start.go:293] postStartSetup for "embed-certs-235567" (driver="docker")
	I1222 01:33:53.068381 2118210 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:33:53.068444 2118210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:33:53.068489 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:53.086033 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:33:53.181355 2118210 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:33:53.184644 2118210 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:33:53.184673 2118210 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:33:53.184685 2118210 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:33:53.184741 2118210 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:33:53.184833 2118210 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:33:53.184941 2118210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:33:53.192209 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:53.209038 2118210 start.go:296] duration metric: took 140.652905ms for postStartSetup
	I1222 01:33:53.209476 2118210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:33:53.225823 2118210 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json ...
	I1222 01:33:53.226094 2118210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:33:53.226157 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:53.241987 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:33:53.334586 2118210 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:33:53.339081 2118210 start.go:128] duration metric: took 10.572217887s to createHost
	I1222 01:33:53.339106 2118210 start.go:83] releasing machines lock for "embed-certs-235567", held for 10.572349312s
	I1222 01:33:53.339181 2118210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:33:53.355824 2118210 ssh_runner.go:195] Run: cat /version.json
	I1222 01:33:53.355849 2118210 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:33:53.355876 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:53.355913 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:33:53.381509 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:33:53.386960 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:33:53.571522 2118210 ssh_runner.go:195] Run: systemctl --version
	I1222 01:33:53.577647 2118210 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:33:53.639941 2118210 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:33:53.645828 2118210 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:33:53.645920 2118210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:33:53.678272 2118210 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:33:53.678296 2118210 start.go:496] detecting cgroup driver to use...
	I1222 01:33:53.678345 2118210 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:33:53.678421 2118210 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:33:53.696015 2118210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:33:53.708853 2118210 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:33:53.708923 2118210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:33:53.727210 2118210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:33:53.746314 2118210 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:33:53.876126 2118210 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:33:54.019479 2118210 docker.go:234] disabling docker service ...
	I1222 01:33:54.019570 2118210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:33:54.043053 2118210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:33:54.056544 2118210 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:33:54.184161 2118210 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:33:54.318485 2118210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:33:54.331356 2118210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:33:54.352097 2118210 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:33:54.352169 2118210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.361207 2118210 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:33:54.361276 2118210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.370492 2118210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.379144 2118210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.388446 2118210 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:33:54.396628 2118210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.407041 2118210 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.420196 2118210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:33:54.429170 2118210 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:33:54.437719 2118210 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:33:54.448031 2118210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:54.564493 2118210 ssh_runner.go:195] Run: sudo systemctl restart crio
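The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and open unprivileged ports via default_sysctls, then daemon-reload and restart cri-o. The same line surgery as a Go sketch, with regexp stand-ins for the first two sed expressions (the remaining edits follow the same pattern):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewrites mirror the sed commands in the log, keyed by pattern.
    var rewrites = []struct{ pattern, repl string }{
    	{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
    	{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
    }

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, r := range rewrites {
    		data = regexp.MustCompile(r.pattern).ReplaceAll(data, []byte(r.repl))
    	}
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		fmt.Println(err)
    	}
    	// A `systemctl daemon-reload` and `systemctl restart crio` follow, as in the log.
    }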
	I1222 01:33:54.732654 2118210 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:33:54.732756 2118210 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:33:54.736525 2118210 start.go:564] Will wait 60s for crictl version
	I1222 01:33:54.736598 2118210 ssh_runner.go:195] Run: which crictl
	I1222 01:33:54.739956 2118210 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:33:54.764575 2118210 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:33:54.764663 2118210 ssh_runner.go:195] Run: crio --version
	I1222 01:33:54.799125 2118210 ssh_runner.go:195] Run: crio --version
	I1222 01:33:54.840779 2118210 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:33:54.843519 2118210 cli_runner.go:164] Run: docker network inspect embed-certs-235567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:33:54.859350 2118210 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:33:54.863307 2118210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:33:54.873058 2118210 kubeadm.go:884] updating cluster {Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:33:54.873166 2118210 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:33:54.873220 2118210 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:33:54.906690 2118210 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:33:54.906711 2118210 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:33:54.906768 2118210 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:33:54.931659 2118210 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:33:54.931683 2118210 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:33:54.931692 2118210 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1222 01:33:54.931792 2118210 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-235567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:33:54.931887 2118210 ssh_runner.go:195] Run: crio config
	I1222 01:33:54.983863 2118210 cni.go:84] Creating CNI manager for ""
	I1222 01:33:54.983884 2118210 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:33:54.983897 2118210 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:33:54.983930 2118210 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235567 NodeName:embed-certs-235567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:33:54.984080 2118210 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:33:54.984156 2118210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:33:54.992698 2118210 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:33:54.992786 2118210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:33:55.002717 2118210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1222 01:33:55.018977 2118210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:33:55.032430 2118210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1222 01:33:55.045602 2118210 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:33:55.049021 2118210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:33:55.058313 2118210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:33:55.197937 2118210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:33:55.214229 2118210 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567 for IP: 192.168.85.2
	I1222 01:33:55.214253 2118210 certs.go:195] generating shared ca certs ...
	I1222 01:33:55.214268 2118210 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:55.214405 2118210 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:33:55.214452 2118210 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:33:55.214463 2118210 certs.go:257] generating profile certs ...
	I1222 01:33:55.214514 2118210 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.key
	I1222 01:33:55.214530 2118210 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.crt with IP's: []
	I1222 01:33:55.328611 2118210 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.crt ...
	I1222 01:33:55.328645 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.crt: {Name:mk6ce239a3d5ab7935187bda718ec90d29c696d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:55.328841 2118210 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.key ...
	I1222 01:33:55.328858 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.key: {Name:mk958c771b05794d9c1aaef4318a58c93bcf9be9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:55.328948 2118210 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key.859ed20e
	I1222 01:33:55.328965 2118210 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt.859ed20e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:33:55.843580 2118210 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt.859ed20e ...
	I1222 01:33:55.843607 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt.859ed20e: {Name:mk414cad727b7c2d939839de0f28c7f0c9c59dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:55.843775 2118210 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key.859ed20e ...
	I1222 01:33:55.843786 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key.859ed20e: {Name:mk656e474e19dc912fbda9b06acdb0246c151d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:55.843864 2118210 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt.859ed20e -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt
	I1222 01:33:55.843965 2118210 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key.859ed20e -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key
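The apiserver certificate minted just above is signed for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]: 10.96.0.1 is the first usable address of the ServiceCIDR 10.96.0.0/12 from the cluster config, i.e. the ClusterIP that the in-cluster kubernetes Service will receive, so pods can talk to the API server by that name. Deriving it in Go:

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstServiceIP returns the first usable address of a service CIDR,
    // which kubeadm assigns to the "kubernetes" Service.
    func firstServiceIP(cidr string) (net.IP, error) {
    	_, subnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	first := make(net.IP, 4)
    	copy(first, subnet.IP.To4())
    	first[3]++ // network address + 1
    	return first, nil
    }

    func main() {
    	ip, err := firstServiceIP("10.96.0.0/12")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip) // 10.96.0.1
    }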
	I1222 01:33:55.844028 2118210 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key
	I1222 01:33:55.844041 2118210 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.crt with IP's: []
	I1222 01:33:56.449063 2118210 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.crt ...
	I1222 01:33:56.449093 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.crt: {Name:mka0c209ed6592e1003e35bf8464b342d268d4ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:56.449328 2118210 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key ...
	I1222 01:33:56.449342 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key: {Name:mk1ed8f6683cd77841857175a2f4ac396fb222ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:33:56.449556 2118210 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:33:56.449603 2118210 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:33:56.449617 2118210 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:33:56.449644 2118210 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:33:56.449672 2118210 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:33:56.449700 2118210 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:33:56.449750 2118210 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:33:56.450350 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:33:56.468827 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:33:56.485838 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:33:56.503612 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:33:56.521196 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1222 01:33:56.538982 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:33:56.556252 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:33:56.573121 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:33:56.590051 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:33:56.615834 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:33:56.637608 2118210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:33:56.655720 2118210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:33:56.669851 2118210 ssh_runner.go:195] Run: openssl version
	I1222 01:33:56.677567 2118210 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:33:56.685425 2118210 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:33:56.692734 2118210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:33:56.696566 2118210 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:33:56.696631 2118210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:33:56.738074 2118210 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:33:56.745463 2118210 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:33:56.752605 2118210 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:56.760219 2118210 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:33:56.767553 2118210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:56.771333 2118210 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:56.771393 2118210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:33:56.812299 2118210 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:33:56.819967 2118210 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:33:56.827359 2118210 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:33:56.834998 2118210 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:33:56.842468 2118210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:33:56.847199 2118210 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:33:56.847275 2118210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:33:56.890857 2118210 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:33:56.898114 2118210 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
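The openssl x509 -hash calls above explain the otherwise opaque symlink names: OpenSSL locates trusted CAs in /etc/ssl/certs by subject hash, so each certificate needs a <hash>.0 link pointing at it (3ec20f2e.0, b5213941.0, and 51391683.0 in this run). A sketch reproducing the hash-and-link step by shelling out to openssl, just as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash creates the <hash>.0 symlink OpenSSL uses to find
    // a CA certificate inside a certs directory.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := certsDir + "/" + hash + ".0"
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Println(err)
    	}
    }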
	I1222 01:33:56.904968 2118210 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:33:56.908907 2118210 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:33:56.908955 2118210 kubeadm.go:401] StartCluster: {Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:33:56.909027 2118210 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:33:56.909120 2118210 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:33:56.937057 2118210 cri.go:96] found id: ""
	I1222 01:33:56.937132 2118210 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:33:56.945149 2118210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:33:56.953111 2118210 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:33:56.953176 2118210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:33:56.961410 2118210 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:33:56.961482 2118210 kubeadm.go:158] found existing configuration files:
	
	I1222 01:33:56.961568 2118210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:33:56.969074 2118210 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:33:56.969137 2118210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:33:56.976315 2118210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:33:56.984282 2118210 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:33:56.984380 2118210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:33:56.991965 2118210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:33:56.999842 2118210 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:33:56.999914 2118210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:33:57.013592 2118210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:33:57.022083 2118210 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:33:57.022148 2118210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:33:57.029806 2118210 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
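The kubeadm init invocation above prepends the version-pinned binaries directory to PATH and waives exactly the preflight checks a containerized node cannot satisfy (swap, memory, CPU count, bridge-nf sysctls, and directories pre-populated by minikube). A sketch of launching it the same way from Go; the real run goes through ssh_runner inside the node, and the ignore list here is abbreviated:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirror the log: pin PATH to the v1.34.3 binaries and skip the
    	// preflight checks that fail inside a docker-driver node.
    	cmd := `env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" ` +
    		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
    		`--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification`
    	out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("kubeadm init failed:", err)
    	}
    }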
	I1222 01:33:57.070806 2118210 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 01:33:57.070868 2118210 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:33:57.096716 2118210 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:33:57.096868 2118210 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:33:57.096939 2118210 kubeadm.go:319] OS: Linux
	I1222 01:33:57.097022 2118210 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:33:57.097113 2118210 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:33:57.097196 2118210 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:33:57.097278 2118210 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:33:57.097402 2118210 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:33:57.097480 2118210 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:33:57.097573 2118210 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:33:57.097671 2118210 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:33:57.097752 2118210 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:33:57.161445 2118210 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:33:57.161581 2118210 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:33:57.161699 2118210 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:33:57.168893 2118210 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:33:57.175580 2118210 out.go:252]   - Generating certificates and keys ...
	I1222 01:33:57.175694 2118210 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:33:57.175788 2118210 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:33:58.060504 2118210 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:33:58.337672 2118210 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:33:58.516990 2118210 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:33:58.959523 2118210 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:33:59.342062 2118210 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:33:59.342225 2118210 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-235567 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:33:59.423540 2118210 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:33:59.423951 2118210 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-235567 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:33:59.880159 2118210 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:34:00.374895 2118210 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:34:00.921449 2118210 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:34:00.921643 2118210 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:34:01.250109 2118210 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:34:01.310796 2118210 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:34:01.420420 2118210 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:34:01.702236 2118210 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:34:02.302543 2118210 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:34:02.303222 2118210 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:34:02.306107 2118210 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:34:02.309856 2118210 out.go:252]   - Booting up control plane ...
	I1222 01:34:02.309956 2118210 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:34:02.310034 2118210 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:34:02.310101 2118210 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:34:02.324997 2118210 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:34:02.325105 2118210 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:34:02.333186 2118210 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:34:02.334214 2118210 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:34:02.335086 2118210 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:34:02.470528 2118210 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:34:02.470648 2118210 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:34:04.971573 2118210 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501251383s
	I1222 01:34:04.975190 2118210 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 01:34:04.975291 2118210 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1222 01:34:04.975384 2118210 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 01:34:04.975466 2118210 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 01:34:10.714182 2118210 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.738254502s
	I1222 01:34:10.824988 2118210 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.849736137s
	I1222 01:34:11.977795 2118210 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002416199s
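
The three control-plane-check probes above poll the apiserver's /livez and the controller-manager's and scheduler's /healthz endpoints until each returns HTTP 200, with a 4m0s ceiling. A minimal Go stand-in for the apiserver probe follows; kubeadm's real check uses its own client and the cluster CA, so the insecure TLS skip here is purely an assumption of the sketch:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Sketch only: skip TLS verification because this probe runs before
    	// we bother loading the cluster CA; kubeadm itself verifies the CA.
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // mirrors "This can take up to 4m0s"
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/livez")
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("kube-apiserver is healthy")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("gave up waiting for /livez")
    }
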
	I1222 01:34:12.010874 2118210 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 01:34:12.028081 2118210 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 01:34:12.041499 2118210 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 01:34:12.041708 2118210 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-235567 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 01:34:12.054507 2118210 kubeadm.go:319] [bootstrap-token] Using token: go42uk.sdifypo3zq0fhnjy
	I1222 01:34:12.057671 2118210 out.go:252]   - Configuring RBAC rules ...
	I1222 01:34:12.057806 2118210 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 01:34:12.066448 2118210 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 01:34:12.076193 2118210 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 01:34:12.080257 2118210 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 01:34:12.085206 2118210 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 01:34:12.091578 2118210 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 01:34:12.385632 2118210 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 01:34:12.853737 2118210 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 01:34:13.387871 2118210 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 01:34:13.387889 2118210 kubeadm.go:319] 
	I1222 01:34:13.387958 2118210 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 01:34:13.387964 2118210 kubeadm.go:319] 
	I1222 01:34:13.388040 2118210 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 01:34:13.388051 2118210 kubeadm.go:319] 
	I1222 01:34:13.388076 2118210 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 01:34:13.388135 2118210 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 01:34:13.388185 2118210 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 01:34:13.388189 2118210 kubeadm.go:319] 
	I1222 01:34:13.388243 2118210 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 01:34:13.388246 2118210 kubeadm.go:319] 
	I1222 01:34:13.388306 2118210 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 01:34:13.388311 2118210 kubeadm.go:319] 
	I1222 01:34:13.388362 2118210 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 01:34:13.388437 2118210 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 01:34:13.388510 2118210 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 01:34:13.388514 2118210 kubeadm.go:319] 
	I1222 01:34:13.388599 2118210 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 01:34:13.388675 2118210 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 01:34:13.388679 2118210 kubeadm.go:319] 
	I1222 01:34:13.388763 2118210 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token go42uk.sdifypo3zq0fhnjy \
	I1222 01:34:13.388866 2118210 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 01:34:13.388885 2118210 kubeadm.go:319] 	--control-plane 
	I1222 01:34:13.388889 2118210 kubeadm.go:319] 
	I1222 01:34:13.388973 2118210 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 01:34:13.388977 2118210 kubeadm.go:319] 
	I1222 01:34:13.389059 2118210 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token go42uk.sdifypo3zq0fhnjy \
	I1222 01:34:13.389161 2118210 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 01:34:13.391686 2118210 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 01:34:13.391921 2118210 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:34:13.392032 2118210 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
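
In the join commands printed above, --discovery-token-ca-cert-hash pins the cluster CA: it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which a joining node recomputes and compares before trusting the control plane. A short Go sketch, assuming the minikube certs directory used earlier in this run:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
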
	I1222 01:34:13.392051 2118210 cni.go:84] Creating CNI manager for ""
	I1222 01:34:13.392062 2118210 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:34:13.397131 2118210 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 01:34:13.400014 2118210 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 01:34:13.404016 2118210 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 01:34:13.404038 2118210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 01:34:13.417781 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1222 01:34:13.687686 2118210 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 01:34:13.687809 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:13.687887 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-235567 minikube.k8s.io/updated_at=2025_12_22T01_34_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=embed-certs-235567 minikube.k8s.io/primary=true
	I1222 01:34:13.855631 2118210 ops.go:34] apiserver oom_adj: -16
	I1222 01:34:13.855742 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:14.356465 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:14.856703 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:15.356056 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:15.855956 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:16.356058 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:16.856126 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:17.356462 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:17.855858 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:18.356456 2118210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:34:18.466435 2118210 kubeadm.go:1114] duration metric: took 4.778670546s to wait for elevateKubeSystemPrivileges
	I1222 01:34:18.466465 2118210 kubeadm.go:403] duration metric: took 21.557513251s to StartCluster
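
The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: minikube re-checks roughly every 500ms until the default service account exists, which is what the 4.77s measured for elevateKubeSystemPrivileges covers. A self-contained sketch of that loop (illustrative; not minikube's actual helper):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA reruns `kubectl get sa default` until it succeeds,
    // mirroring the ~500ms cadence visible in the log above.
    func waitForDefaultSA(ctx context.Context) error {
    	kubectl := "/var/lib/minikube/binaries/v1.34.3/kubectl"
    	for {
    		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // the default service account exists; RBAC bootstrap is done
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForDefaultSA(ctx); err != nil {
    		fmt.Println("timed out waiting for default service account:", err)
    	}
    }
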
	I1222 01:34:18.466482 2118210 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:34:18.466544 2118210 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:34:18.467545 2118210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:34:18.467774 2118210 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:34:18.467886 2118210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 01:34:18.468123 2118210 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:34:18.468172 2118210 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:34:18.468235 2118210 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-235567"
	I1222 01:34:18.468255 2118210 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-235567"
	I1222 01:34:18.468330 2118210 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:34:18.468821 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:18.469270 2118210 addons.go:70] Setting default-storageclass=true in profile "embed-certs-235567"
	I1222 01:34:18.469289 2118210 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235567"
	I1222 01:34:18.469574 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:18.471829 2118210 out.go:179] * Verifying Kubernetes components...
	I1222 01:34:18.477485 2118210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:34:18.515582 2118210 addons.go:239] Setting addon default-storageclass=true in "embed-certs-235567"
	I1222 01:34:18.515619 2118210 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:34:18.516104 2118210 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:18.520491 2118210 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:34:18.523327 2118210 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:34:18.523350 2118210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:34:18.523430 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:34:18.562828 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:34:18.564148 2118210 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:34:18.564164 2118210 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:34:18.564240 2118210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:34:18.589404 2118210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38756 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:34:18.769334 2118210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 01:34:18.872343 2118210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:34:18.890494 2118210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:34:18.935242 2118210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:34:19.524108 2118210 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
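
The sed pipeline a few lines up splices a hosts block into CoreDNS's Corefile ahead of the forward directive (and a log directive ahead of errors); that is what makes host.minikube.internal resolve to the host gateway from inside the cluster. After the replace, the coredns ConfigMap contains roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
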
	I1222 01:34:19.525213 2118210 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235567" to be "Ready" ...
	I1222 01:34:19.850707 2118210 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1222 01:34:19.852553 2118210 addons.go:530] duration metric: took 1.384376772s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1222 01:34:20.032098 2118210 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-235567" context rescaled to 1 replicas
	W1222 01:34:21.529827 2118210 node_ready.go:57] node "embed-certs-235567" has "Ready":"False" status (will retry)
	W1222 01:34:24.029635 2118210 node_ready.go:57] node "embed-certs-235567" has "Ready":"False" status (will retry)
	W1222 01:34:26.030451 2118210 node_ready.go:57] node "embed-certs-235567" has "Ready":"False" status (will retry)
	W1222 01:34:28.529969 2118210 node_ready.go:57] node "embed-certs-235567" has "Ready":"False" status (will retry)
	W1222 01:34:31.029264 2118210 node_ready.go:57] node "embed-certs-235567" has "Ready":"False" status (will retry)
	I1222 01:34:32.029666 2118210 node_ready.go:49] node "embed-certs-235567" is "Ready"
	I1222 01:34:32.029692 2118210 node_ready.go:38] duration metric: took 12.503145821s for node "embed-certs-235567" to be "Ready" ...
	I1222 01:34:32.029705 2118210 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:34:32.029768 2118210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:34:32.041697 2118210 api_server.go:72] duration metric: took 13.573889752s to wait for apiserver process to appear ...
	I1222 01:34:32.041724 2118210 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:34:32.041743 2118210 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:34:32.050889 2118210 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 01:34:32.051985 2118210 api_server.go:141] control plane version: v1.34.3
	I1222 01:34:32.052010 2118210 api_server.go:131] duration metric: took 10.279508ms to wait for apiserver health ...
	I1222 01:34:32.052021 2118210 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:34:32.055338 2118210 system_pods.go:59] 8 kube-system pods found
	I1222 01:34:32.055376 2118210 system_pods.go:61] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:34:32.055384 2118210 system_pods.go:61] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running
	I1222 01:34:32.055390 2118210 system_pods.go:61] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:34:32.055394 2118210 system_pods.go:61] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running
	I1222 01:34:32.055399 2118210 system_pods.go:61] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running
	I1222 01:34:32.055403 2118210 system_pods.go:61] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running
	I1222 01:34:32.055406 2118210 system_pods.go:61] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running
	I1222 01:34:32.055418 2118210 system_pods.go:61] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:34:32.055426 2118210 system_pods.go:74] duration metric: took 3.400291ms to wait for pod list to return data ...
	I1222 01:34:32.055440 2118210 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:34:32.058108 2118210 default_sa.go:45] found service account: "default"
	I1222 01:34:32.058133 2118210 default_sa.go:55] duration metric: took 2.687139ms for default service account to be created ...
	I1222 01:34:32.058143 2118210 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:34:32.061201 2118210 system_pods.go:86] 8 kube-system pods found
	I1222 01:34:32.061237 2118210 system_pods.go:89] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:34:32.061246 2118210 system_pods.go:89] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running
	I1222 01:34:32.061253 2118210 system_pods.go:89] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:34:32.061258 2118210 system_pods.go:89] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running
	I1222 01:34:32.061263 2118210 system_pods.go:89] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running
	I1222 01:34:32.061267 2118210 system_pods.go:89] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running
	I1222 01:34:32.061272 2118210 system_pods.go:89] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running
	I1222 01:34:32.061279 2118210 system_pods.go:89] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:34:32.061314 2118210 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1222 01:34:32.332666 2118210 system_pods.go:86] 8 kube-system pods found
	I1222 01:34:32.332709 2118210 system_pods.go:89] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:34:32.332721 2118210 system_pods.go:89] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running
	I1222 01:34:32.332728 2118210 system_pods.go:89] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:34:32.332732 2118210 system_pods.go:89] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running
	I1222 01:34:32.332743 2118210 system_pods.go:89] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running
	I1222 01:34:32.332755 2118210 system_pods.go:89] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running
	I1222 01:34:32.332762 2118210 system_pods.go:89] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running
	I1222 01:34:32.332771 2118210 system_pods.go:89] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:34:32.655397 2118210 system_pods.go:86] 8 kube-system pods found
	I1222 01:34:32.655435 2118210 system_pods.go:89] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:34:32.655444 2118210 system_pods.go:89] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running
	I1222 01:34:32.655459 2118210 system_pods.go:89] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:34:32.655465 2118210 system_pods.go:89] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running
	I1222 01:34:32.655471 2118210 system_pods.go:89] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running
	I1222 01:34:32.655475 2118210 system_pods.go:89] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running
	I1222 01:34:32.655480 2118210 system_pods.go:89] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running
	I1222 01:34:32.655491 2118210 system_pods.go:89] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:34:33.082042 2118210 system_pods.go:86] 8 kube-system pods found
	I1222 01:34:33.082076 2118210 system_pods.go:89] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Running
	I1222 01:34:33.082084 2118210 system_pods.go:89] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running
	I1222 01:34:33.082088 2118210 system_pods.go:89] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:34:33.082093 2118210 system_pods.go:89] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running
	I1222 01:34:33.082119 2118210 system_pods.go:89] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running
	I1222 01:34:33.082133 2118210 system_pods.go:89] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running
	I1222 01:34:33.082138 2118210 system_pods.go:89] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running
	I1222 01:34:33.082142 2118210 system_pods.go:89] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Running
	I1222 01:34:33.082149 2118210 system_pods.go:126] duration metric: took 1.024000318s to wait for k8s-apps to be running ...
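
The three successive kube-system pod dumps above are one poll loop: minikube lists the pods and retries on a short backoff (the 300ms noted by retry.go) until no component is missing. A condensed client-go sketch of the same idea, simplified to pod phases rather than minikube's per-component label checks:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    		if err != nil {
    			panic(err)
    		}
    		pending := 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				pending++ // e.g. coredns still Pending in the dumps above
    			}
    		}
    		if pending == 0 {
    			fmt.Printf("all %d kube-system pods running\n", len(pods.Items))
    			return
    		}
    		time.Sleep(300 * time.Millisecond) // matches the retry interval in the log
    	}
    }
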
	I1222 01:34:33.082157 2118210 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:34:33.082226 2118210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:34:33.097713 2118210 system_svc.go:56] duration metric: took 15.545721ms WaitForService to wait for kubelet
	I1222 01:34:33.097789 2118210 kubeadm.go:587] duration metric: took 14.629985855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:34:33.097823 2118210 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:34:33.103692 2118210 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:34:33.104122 2118210 node_conditions.go:123] node cpu capacity is 2
	I1222 01:34:33.104137 2118210 node_conditions.go:105] duration metric: took 6.296285ms to run NodePressure ...
	I1222 01:34:33.104151 2118210 start.go:242] waiting for startup goroutines ...
	I1222 01:34:33.104159 2118210 start.go:247] waiting for cluster config update ...
	I1222 01:34:33.104234 2118210 start.go:256] writing updated cluster config ...
	I1222 01:34:33.104703 2118210 ssh_runner.go:195] Run: rm -f paused
	I1222 01:34:33.108438 2118210 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:34:33.112009 2118210 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q5hbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.116991 2118210 pod_ready.go:94] pod "coredns-66bc5c9577-q5hbz" is "Ready"
	I1222 01:34:33.117019 2118210 pod_ready.go:86] duration metric: took 4.9279ms for pod "coredns-66bc5c9577-q5hbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.119311 2118210 pod_ready.go:83] waiting for pod "etcd-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.123839 2118210 pod_ready.go:94] pod "etcd-embed-certs-235567" is "Ready"
	I1222 01:34:33.123867 2118210 pod_ready.go:86] duration metric: took 4.529939ms for pod "etcd-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.126374 2118210 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.130879 2118210 pod_ready.go:94] pod "kube-apiserver-embed-certs-235567" is "Ready"
	I1222 01:34:33.130909 2118210 pod_ready.go:86] duration metric: took 4.511986ms for pod "kube-apiserver-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.133154 2118210 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.513313 2118210 pod_ready.go:94] pod "kube-controller-manager-embed-certs-235567" is "Ready"
	I1222 01:34:33.513414 2118210 pod_ready.go:86] duration metric: took 380.232102ms for pod "kube-controller-manager-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:33.712519 2118210 pod_ready.go:83] waiting for pod "kube-proxy-shsn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:34.112727 2118210 pod_ready.go:94] pod "kube-proxy-shsn4" is "Ready"
	I1222 01:34:34.112762 2118210 pod_ready.go:86] duration metric: took 400.219822ms for pod "kube-proxy-shsn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:34.313185 2118210 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:34.712148 2118210 pod_ready.go:94] pod "kube-scheduler-embed-certs-235567" is "Ready"
	I1222 01:34:34.712178 2118210 pod_ready.go:86] duration metric: took 398.964187ms for pod "kube-scheduler-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:34:34.712192 2118210 pod_ready.go:40] duration metric: took 1.603721136s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:34:34.766233 2118210 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:34:34.769476 2118210 out.go:179] * Done! kubectl is now configured to use "embed-certs-235567" cluster and "default" namespace by default
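
The version line just above compares kubectl's minor version against the cluster's; a skew of one minor version is within kubectl's supported range, so the run finishes without a warning. The arithmetic amounts to the following (illustrative helper, not minikube's code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference of the minor version
    // components of two "major.minor.patch" version strings.
    func minorSkew(client, server string) int {
    	minor := func(v string) int {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		m, _ := strconv.Atoi(parts[1])
    		return m
    	}
    	d := minor(client) - minor(server)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	fmt.Println(minorSkew("1.33.2", "1.34.3")) // 1
    }
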
	
	
	==> CRI-O <==
	Dec 22 01:34:32 embed-certs-235567 crio[838]: time="2025-12-22T01:34:32.158057316Z" level=info msg="Created container ba7d06ec1f0f655f93c7688600a8ae46ab9446a6ad7b68829e47cbce7f20ef95: kube-system/coredns-66bc5c9577-q5hbz/coredns" id=8c0d99d9-9eb5-4696-9a16-2ff301a878bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:34:32 embed-certs-235567 crio[838]: time="2025-12-22T01:34:32.158959992Z" level=info msg="Starting container: ba7d06ec1f0f655f93c7688600a8ae46ab9446a6ad7b68829e47cbce7f20ef95" id=202612a3-cac1-4d18-8219-7f4f236dcb24 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:34:32 embed-certs-235567 crio[838]: time="2025-12-22T01:34:32.160843907Z" level=info msg="Started container" PID=1778 containerID=ba7d06ec1f0f655f93c7688600a8ae46ab9446a6ad7b68829e47cbce7f20ef95 description=kube-system/coredns-66bc5c9577-q5hbz/coredns id=202612a3-cac1-4d18-8219-7f4f236dcb24 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1ad785f1ea3b51d137236d9d6e8792512ff70587bdf0ba148860f3e02577552
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.28692592Z" level=info msg="Running pod sandbox: default/busybox/POD" id=903ae2d0-6fb9-4c5b-8480-720e7f8daa1e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.28701953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.292620394Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd UID:7bd951a1-ccf0-44f9-87aa-0d7ba288da1e NetNS:/var/run/netns/d38cdf34-2709-4da2-886a-871d1bc798be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012de28}] Aliases:map[]}"
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.292804988Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.303492295Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd UID:7bd951a1-ccf0-44f9-87aa-0d7ba288da1e NetNS:/var/run/netns/d38cdf34-2709-4da2-886a-871d1bc798be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012de28}] Aliases:map[]}"
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.303656877Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.307559266Z" level=info msg="Ran pod sandbox cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd with infra container: default/busybox/POD" id=903ae2d0-6fb9-4c5b-8480-720e7f8daa1e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.308895595Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e6599141-1cf0-4114-a0f0-a5e1e4d98c19 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.309114755Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e6599141-1cf0-4114-a0f0-a5e1e4d98c19 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.309163213Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e6599141-1cf0-4114-a0f0-a5e1e4d98c19 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.311430829Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eebded9d-03a4-4304-9605-e4d498999303 name=/runtime.v1.ImageService/PullImage
	Dec 22 01:34:35 embed-certs-235567 crio[838]: time="2025-12-22T01:34:35.312834397Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.305439179Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=eebded9d-03a4-4304-9605-e4d498999303 name=/runtime.v1.ImageService/PullImage
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.306516201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=60be1c53-4551-4d84-9d98-f044312be144 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.308205807Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=454def1b-d0be-4622-a1dd-2749adacadae name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.314805763Z" level=info msg="Creating container: default/busybox/busybox" id=5261d33e-9354-4103-afbb-53134dbe89a1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.314910294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.319530913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.319981213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.33793313Z" level=info msg="Created container 4ce4d6c8360c2f2dc6242d3d17953b174c4e570e6189180c2393b72cfd124aa0: default/busybox/busybox" id=5261d33e-9354-4103-afbb-53134dbe89a1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.338936767Z" level=info msg="Starting container: 4ce4d6c8360c2f2dc6242d3d17953b174c4e570e6189180c2393b72cfd124aa0" id=5afae4a2-1864-423f-8102-899c4192575b name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:34:37 embed-certs-235567 crio[838]: time="2025-12-22T01:34:37.341611189Z" level=info msg="Started container" PID=1830 containerID=4ce4d6c8360c2f2dc6242d3d17953b174c4e570e6189180c2393b72cfd124aa0 description=default/busybox/busybox id=5afae4a2-1864-423f-8102-899c4192575b name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	4ce4d6c8360c2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   cb4e5ee029d1f       busybox                                      default
	ba7d06ec1f0f6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   b1ad785f1ea3b       coredns-66bc5c9577-q5hbz                     kube-system
	9c7df3798341c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   39a5a9fc560e8       storage-provisioner                          kube-system
	468644a694140       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   d2c2c0bb41fdb       kindnet-mkqcr                                kube-system
	88e0600d355a5       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                      26 seconds ago      Running             kube-proxy                0                   3c70bc13a643c       kube-proxy-shsn4                             kube-system
	0105f478e8ab8       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                      40 seconds ago      Running             kube-scheduler            0                   1bd43f9abbcb2       kube-scheduler-embed-certs-235567            kube-system
	f5146b21e8a68       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                      40 seconds ago      Running             kube-controller-manager   0                   2fbf0095aa05d       kube-controller-manager-embed-certs-235567   kube-system
	ce082216809b9       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                      40 seconds ago      Running             kube-apiserver            0                   d0a0a5d0fcd06       kube-apiserver-embed-certs-235567            kube-system
	d972f5132e986       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      40 seconds ago      Running             etcd                      0                   8f7fbbf64c13b       etcd-embed-certs-235567                      kube-system
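
The container-status table above is the CRI runtime's view of the node. The same listing can be pulled straight off CRI-O's socket with the CRI API; a minimal Go sketch follows (the socket path is assumed to be CRI-O's default for this image):

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial CRI-O's gRPC endpoint over its unix socket (default path assumed).
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	client := pb.NewRuntimeServiceClient(conn)
    	resp, err := client.ListContainers(context.TODO(), &pb.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	// Print a reduced version of the table above: ID prefix, state, name.
    	for _, c := range resp.Containers {
    		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.State, c.Metadata.Name)
    	}
    }
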
	
	
	==> coredns [ba7d06ec1f0f655f93c7688600a8ae46ab9446a6ad7b68829e47cbce7f20ef95] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46966 - 33858 "HINFO IN 3349375207043799163.7334124661940480966. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021015193s
	
	
	==> describe nodes <==
	Name:               embed-certs-235567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-235567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=embed-certs-235567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_34_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-235567
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:34:43 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:34:43 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:34:43 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:34:43 +0000   Mon, 22 Dec 2025 01:34:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-235567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                d4644b9d-9d43-4a69-8d1b-d5cf0da8245d
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-q5hbz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-embed-certs-235567                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-mkqcr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-235567             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-235567    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-shsn4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-235567             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 26s   kube-proxy       
	  Normal   Starting                 33s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s   kubelet          Node embed-certs-235567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-235567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s   kubelet          Node embed-certs-235567 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s   node-controller  Node embed-certs-235567 event: Registered Node embed-certs-235567 in Controller
	  Normal   NodeReady                14s   kubelet          Node embed-certs-235567 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec22 01:04] overlayfs: idmapped layers are currently not supported
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d972f5132e9863bff55b32b82de1d1248f5c1fcf7607aeb659f0910535dab244] <==
	{"level":"warn","ts":"2025-12-22T01:34:08.401912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.417799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.459744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.486065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.527405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.560077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.577192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.645302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.673069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.707338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.722781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.745948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.763406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.776520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.800304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.815596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.834650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.851232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.871506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.886414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.906146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.946190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.980692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:08.998017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:34:09.091601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54208","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:34:45 up 1 day,  8:17,  0 user,  load average: 2.63, 2.98, 2.46
	Linux embed-certs-235567 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [468644a694140a6f439f1ef06f5881479302019a01bb29a8dd2d8643cc6b9a57] <==
	I1222 01:34:21.324575       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:34:21.324781       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:34:21.324914       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:34:21.324934       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:34:21.324948       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:34:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:34:21.525218       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:34:21.526241       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:34:21.526849       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:34:21.527020       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:34:21.727170       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:34:21.727194       1 metrics.go:72] Registering metrics
	I1222 01:34:21.727248       1 controller.go:711] "Syncing nftables rules"
	I1222 01:34:31.529434       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:34:31.529551       1 main.go:301] handling current node
	I1222 01:34:41.526684       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:34:41.526718       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce082216809b9fdad93ee317c9b740f67a06367ed98d049a39cc367eab01f65f] <==
	I1222 01:34:10.339754       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1222 01:34:10.339819       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:34:10.344777       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1222 01:34:10.354455       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:34:10.354588       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1222 01:34:10.358028       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:34:10.385307       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:34:10.393660       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1222 01:34:10.891329       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1222 01:34:10.897729       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1222 01:34:10.897757       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:34:11.634249       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:34:11.695661       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:34:11.842305       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1222 01:34:11.854680       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1222 01:34:11.855841       1 controller.go:667] quota admission added evaluator for: endpoints
	I1222 01:34:11.860782       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:34:12.732751       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:34:12.822695       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1222 01:34:12.852511       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1222 01:34:12.885217       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1222 01:34:18.711233       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:34:18.720171       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:34:18.722167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:34:18.816187       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f5146b21e8a68054b256ade5d4afe44581f864eb998457856a1d626060c03d76] <==
	I1222 01:34:17.740124       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1222 01:34:17.740667       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:34:17.751290       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1222 01:34:17.762046       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-235567" podCIDRs=["10.244.0.0/24"]
	I1222 01:34:17.762468       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1222 01:34:17.769829       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:34:17.773549       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 01:34:17.776829       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:34:17.776913       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 01:34:17.776944       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 01:34:17.780378       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 01:34:17.780392       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:34:17.780514       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:34:17.780567       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1222 01:34:17.780601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:34:17.780639       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1222 01:34:17.780668       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 01:34:17.780763       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1222 01:34:17.780974       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:34:17.780998       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1222 01:34:17.782911       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1222 01:34:17.786154       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:34:17.789394       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1222 01:34:17.800911       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:34:32.729632       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [88e0600d355a58095b93cbb30b229057982a36e959258a9284dadd0403e64f00] <==
	I1222 01:34:19.415056       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:34:19.499247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:34:19.600301       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:34:19.600344       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1222 01:34:19.600442       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:34:19.717456       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:34:19.717506       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:34:19.735144       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:34:19.735405       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:34:19.735419       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:34:19.738170       1 config.go:200] "Starting service config controller"
	I1222 01:34:19.738225       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:34:19.738243       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:34:19.738248       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:34:19.738264       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:34:19.738268       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:34:19.738849       1 config.go:309] "Starting node config controller"
	I1222 01:34:19.738855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:34:19.738861       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:34:19.838371       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1222 01:34:19.838403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:34:19.838457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0105f478e8ab8e84ef2973ee184a8031f189c66f7f9cae694abe6f76330b88ea] <==
	I1222 01:34:10.802254       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:34:10.804339       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:34:10.804375       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:34:10.804786       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:34:10.804881       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1222 01:34:10.805800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1222 01:34:10.817739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1222 01:34:10.821162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1222 01:34:10.821258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1222 01:34:10.821308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1222 01:34:10.821178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 01:34:10.821406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:34:10.821544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1222 01:34:10.821590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 01:34:10.821601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1222 01:34:10.821634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:34:10.821701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1222 01:34:10.821750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1222 01:34:10.821845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1222 01:34:10.821845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1222 01:34:10.821954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1222 01:34:10.822034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1222 01:34:10.822162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 01:34:10.822582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1222 01:34:12.305039       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 22 01:34:17 embed-certs-235567 kubelet[1303]: I1222 01:34:17.811004    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 22 01:34:17 embed-certs-235567 kubelet[1303]: I1222 01:34:17.811568    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964145    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9584571-8e4f-41dc-a968-d5eb5e4f3d10-kube-proxy\") pod \"kube-proxy-shsn4\" (UID: \"c9584571-8e4f-41dc-a968-d5eb5e4f3d10\") " pod="kube-system/kube-proxy-shsn4"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964194    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9584571-8e4f-41dc-a968-d5eb5e4f3d10-xtables-lock\") pod \"kube-proxy-shsn4\" (UID: \"c9584571-8e4f-41dc-a968-d5eb5e4f3d10\") " pod="kube-system/kube-proxy-shsn4"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964219    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49rlb\" (UniqueName: \"kubernetes.io/projected/c9584571-8e4f-41dc-a968-d5eb5e4f3d10-kube-api-access-49rlb\") pod \"kube-proxy-shsn4\" (UID: \"c9584571-8e4f-41dc-a968-d5eb5e4f3d10\") " pod="kube-system/kube-proxy-shsn4"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964255    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a59d7061-3de4-4c18-ae24-a8087309931e-cni-cfg\") pod \"kindnet-mkqcr\" (UID: \"a59d7061-3de4-4c18-ae24-a8087309931e\") " pod="kube-system/kindnet-mkqcr"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964290    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a59d7061-3de4-4c18-ae24-a8087309931e-xtables-lock\") pod \"kindnet-mkqcr\" (UID: \"a59d7061-3de4-4c18-ae24-a8087309931e\") " pod="kube-system/kindnet-mkqcr"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964312    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a59d7061-3de4-4c18-ae24-a8087309931e-lib-modules\") pod \"kindnet-mkqcr\" (UID: \"a59d7061-3de4-4c18-ae24-a8087309931e\") " pod="kube-system/kindnet-mkqcr"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964333    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwp6f\" (UniqueName: \"kubernetes.io/projected/a59d7061-3de4-4c18-ae24-a8087309931e-kube-api-access-hwp6f\") pod \"kindnet-mkqcr\" (UID: \"a59d7061-3de4-4c18-ae24-a8087309931e\") " pod="kube-system/kindnet-mkqcr"
	Dec 22 01:34:18 embed-certs-235567 kubelet[1303]: I1222 01:34:18.964355    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9584571-8e4f-41dc-a968-d5eb5e4f3d10-lib-modules\") pod \"kube-proxy-shsn4\" (UID: \"c9584571-8e4f-41dc-a968-d5eb5e4f3d10\") " pod="kube-system/kube-proxy-shsn4"
	Dec 22 01:34:19 embed-certs-235567 kubelet[1303]: I1222 01:34:19.185698    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 22 01:34:19 embed-certs-235567 kubelet[1303]: W1222 01:34:19.292772    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/crio-3c70bc13a643c3ee731330ab339b598e352f530702bcd530a7adda2b0b96f32c WatchSource:0}: Error finding container 3c70bc13a643c3ee731330ab339b598e352f530702bcd530a7adda2b0b96f32c: Status 404 returned error can't find the container with id 3c70bc13a643c3ee731330ab339b598e352f530702bcd530a7adda2b0b96f32c
	Dec 22 01:34:21 embed-certs-235567 kubelet[1303]: I1222 01:34:21.996266    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-shsn4" podStartSLOduration=3.996233734 podStartE2EDuration="3.996233734s" podCreationTimestamp="2025-12-22 01:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:34:20.013891775 +0000 UTC m=+7.344860632" watchObservedRunningTime="2025-12-22 01:34:21.996233734 +0000 UTC m=+9.327202493"
	Dec 22 01:34:21 embed-certs-235567 kubelet[1303]: I1222 01:34:21.996894    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mkqcr" podStartSLOduration=2.097064014 podStartE2EDuration="3.996884299s" podCreationTimestamp="2025-12-22 01:34:18 +0000 UTC" firstStartedPulling="2025-12-22 01:34:19.295478413 +0000 UTC m=+6.626447172" lastFinishedPulling="2025-12-22 01:34:21.19529869 +0000 UTC m=+8.526267457" observedRunningTime="2025-12-22 01:34:21.996203352 +0000 UTC m=+9.327172127" watchObservedRunningTime="2025-12-22 01:34:21.996884299 +0000 UTC m=+9.327853116"
	Dec 22 01:34:31 embed-certs-235567 kubelet[1303]: I1222 01:34:31.684663    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 22 01:34:31 embed-certs-235567 kubelet[1303]: I1222 01:34:31.855562    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e232d85-b68a-49ce-9efd-8a6981b86d96-tmp\") pod \"storage-provisioner\" (UID: \"1e232d85-b68a-49ce-9efd-8a6981b86d96\") " pod="kube-system/storage-provisioner"
	Dec 22 01:34:31 embed-certs-235567 kubelet[1303]: I1222 01:34:31.855986    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpb7\" (UniqueName: \"kubernetes.io/projected/1e232d85-b68a-49ce-9efd-8a6981b86d96-kube-api-access-twpb7\") pod \"storage-provisioner\" (UID: \"1e232d85-b68a-49ce-9efd-8a6981b86d96\") " pod="kube-system/storage-provisioner"
	Dec 22 01:34:31 embed-certs-235567 kubelet[1303]: I1222 01:34:31.856106    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8e9fbc-5c68-4e74-b834-1141e6a8e09a-config-volume\") pod \"coredns-66bc5c9577-q5hbz\" (UID: \"db8e9fbc-5c68-4e74-b834-1141e6a8e09a\") " pod="kube-system/coredns-66bc5c9577-q5hbz"
	Dec 22 01:34:31 embed-certs-235567 kubelet[1303]: I1222 01:34:31.856225    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvhn\" (UniqueName: \"kubernetes.io/projected/db8e9fbc-5c68-4e74-b834-1141e6a8e09a-kube-api-access-9wvhn\") pod \"coredns-66bc5c9577-q5hbz\" (UID: \"db8e9fbc-5c68-4e74-b834-1141e6a8e09a\") " pod="kube-system/coredns-66bc5c9577-q5hbz"
	Dec 22 01:34:32 embed-certs-235567 kubelet[1303]: W1222 01:34:32.085214    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/crio-39a5a9fc560e818895dcb7c9d4c38e42b33ac7eb2ac6c28165aaab5f5a324c1a WatchSource:0}: Error finding container 39a5a9fc560e818895dcb7c9d4c38e42b33ac7eb2ac6c28165aaab5f5a324c1a: Status 404 returned error can't find the container with id 39a5a9fc560e818895dcb7c9d4c38e42b33ac7eb2ac6c28165aaab5f5a324c1a
	Dec 22 01:34:32 embed-certs-235567 kubelet[1303]: W1222 01:34:32.108436    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/crio-b1ad785f1ea3b51d137236d9d6e8792512ff70587bdf0ba148860f3e02577552 WatchSource:0}: Error finding container b1ad785f1ea3b51d137236d9d6e8792512ff70587bdf0ba148860f3e02577552: Status 404 returned error can't find the container with id b1ad785f1ea3b51d137236d9d6e8792512ff70587bdf0ba148860f3e02577552
	Dec 22 01:34:33 embed-certs-235567 kubelet[1303]: I1222 01:34:33.044482    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.044461507 podStartE2EDuration="14.044461507s" podCreationTimestamp="2025-12-22 01:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:34:33.031152797 +0000 UTC m=+20.362121564" watchObservedRunningTime="2025-12-22 01:34:33.044461507 +0000 UTC m=+20.375430274"
	Dec 22 01:34:34 embed-certs-235567 kubelet[1303]: I1222 01:34:34.969690    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q5hbz" podStartSLOduration=16.969662238 podStartE2EDuration="16.969662238s" podCreationTimestamp="2025-12-22 01:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:34:33.045874912 +0000 UTC m=+20.376843671" watchObservedRunningTime="2025-12-22 01:34:34.969662238 +0000 UTC m=+22.300631013"
	Dec 22 01:34:35 embed-certs-235567 kubelet[1303]: I1222 01:34:35.078445    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnhd\" (UniqueName: \"kubernetes.io/projected/7bd951a1-ccf0-44f9-87aa-0d7ba288da1e-kube-api-access-qgnhd\") pod \"busybox\" (UID: \"7bd951a1-ccf0-44f9-87aa-0d7ba288da1e\") " pod="default/busybox"
	Dec 22 01:34:35 embed-certs-235567 kubelet[1303]: W1222 01:34:35.307971    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/crio-cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd WatchSource:0}: Error finding container cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd: Status 404 returned error can't find the container with id cb4e5ee029d1fd93e541e7bf54bec4a97fefdd01df17489fda7c9232826758fd
	
	
	==> storage-provisioner [9c7df3798341c4de455333b62f4c77a7c5844c1c80112750bffebd1c1d4722a4] <==
	I1222 01:34:32.150764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:34:32.187803       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:34:32.187879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1222 01:34:32.194294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:32.213457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:34:32.213634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:34:32.213816       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-235567_a224ba55-6250-4628-a994-ae2e4a2b84b2!
	I1222 01:34:32.214717       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d572cc3c-b104-421c-a93c-f4c2c45f34e5", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-235567_a224ba55-6250-4628-a994-ae2e4a2b84b2 became leader
	W1222 01:34:32.223430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:32.229609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:34:32.314419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-235567_a224ba55-6250-4628-a994-ae2e4a2b84b2!
	W1222 01:34:34.233031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:34.237591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:36.240930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:36.248945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:38.251962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:38.256228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:40.259263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:40.266272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:42.271128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:42.277164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:44.280657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:34:44.285177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
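A note on the storage-provisioner block at the end of these logs: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings come from the provisioner's leader election, which still takes an Endpoints-based lock on kube-system/k8s.io-minikube-hostpath. They are noise, not a failure. Below is a minimal sketch of the same election done with client-go's Lease-based lock, which avoids the warning; the identity string is hypothetical and error handling is trimmed to the essentials.

// Sketch: Lease-based leader election with client-go, replacing the
// Endpoints lock that produces the deprecation warnings above.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock name and namespace as in the log, but stored in a
	// coordination.k8s.io/v1 Lease instead of a v1 Endpoints object.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: "embed-certs-235567-example", // hypothetical identity
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired lease; provisioner controller would start here")
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease; shutting down")
			},
		},
	})
}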
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-235567 -n embed-certs-235567
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-235567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.52s)
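For reference, the etcd "rejected connection on client endpoint ... EOF" warnings at the top of the post-mortem logs above are what etcd's TLS client listener reports when a client opens a TCP connection and closes it before the handshake completes; health checks and port scans commonly do this, so on their own these lines do not indicate a fault. A minimal sketch of such a probe follows (the endpoint is an assumption: etcd's default client port, 127.0.0.1:2379, reachable from where the probe runs).

// Sketch: a bare TCP probe against etcd's TLS client port. Closing the
// connection without sending a ClientHello makes etcd read EOF and log
// "rejected connection on client endpoint".
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	conn.Close() // no TLS handshake: etcd sees EOF and logs a warning
	log.Println("probe sent; check the etcd log for a rejected-connection entry")
}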

TestStartStop/group/embed-certs/serial/Pause (6.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-235567 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-235567 --alsologtostderr -v=1: exit status 80 (1.933335604s)

-- stdout --
	* Pausing node embed-certs-235567 ... 
	
	

-- /stdout --
** stderr ** 
	I1222 01:36:00.832341 2124774 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:36:00.832465 2124774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:00.832482 2124774 out.go:374] Setting ErrFile to fd 2...
	I1222 01:36:00.832489 2124774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:00.832758 2124774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:36:00.833022 2124774 out.go:368] Setting JSON to false
	I1222 01:36:00.833047 2124774 mustload.go:66] Loading cluster: embed-certs-235567
	I1222 01:36:00.833500 2124774 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:36:00.834018 2124774 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:36:00.851880 2124774 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:36:00.852216 2124774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:00.914935 2124774 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-22 01:36:00.905187846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:00.915582 2124774 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-235567 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1222 01:36:00.919082 2124774 out.go:179] * Pausing node embed-certs-235567 ... 
	I1222 01:36:00.921988 2124774 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:36:00.922310 2124774 ssh_runner.go:195] Run: systemctl --version
	I1222 01:36:00.922357 2124774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:36:00.940048 2124774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:36:01.036262 2124774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:36:01.055536 2124774 pause.go:52] kubelet running: true
	I1222 01:36:01.055615 2124774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:36:01.336714 2124774 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:36:01.336795 2124774 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:36:01.407379 2124774 cri.go:96] found id: "d61c20e397c0328b0fd6e7bd1535b2a978028501bf444674fce6879659e6ae3b"
	I1222 01:36:01.407402 2124774 cri.go:96] found id: "65b574438a73e2b2eefd38c4e7e07643ee4f34520f64175c82322af50cf08c67"
	I1222 01:36:01.407407 2124774 cri.go:96] found id: "43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c"
	I1222 01:36:01.407411 2124774 cri.go:96] found id: "6e7e75c0138f0dafd4e46397e222ce458e1253549363ec3b3c0b35b0eb5a3e22"
	I1222 01:36:01.407414 2124774 cri.go:96] found id: "61111522e0f72b777c735bc37c123c749993783fad4a5178055cf076026c6073"
	I1222 01:36:01.407418 2124774 cri.go:96] found id: "241d1760e627abc7a3d344ac05e9a57f9c7451cb29646bdefe78bfbc841f5c11"
	I1222 01:36:01.407421 2124774 cri.go:96] found id: "21f70a0897f7814aab2cb5d3f2434693fd5a4d6832c30cb70742dc4fa69e6e5d"
	I1222 01:36:01.407425 2124774 cri.go:96] found id: "1b831e8cc8f3d0a0cb574d1962397875204d05857734e2b9f7aae161ea35fe4e"
	I1222 01:36:01.407428 2124774 cri.go:96] found id: "bef725e3057a75230c765ab506197a6b277dd374b7da1670267592ddadf181e3"
	I1222 01:36:01.407433 2124774 cri.go:96] found id: "09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207"
	I1222 01:36:01.407438 2124774 cri.go:96] found id: "611a97bfc2a29f0e10c0196b1d3eba98b24c290267f58aab6f754d2c388f96e7"
	I1222 01:36:01.407448 2124774 cri.go:96] found id: ""
	I1222 01:36:01.407499 2124774 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:36:01.421922 2124774 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:36:01Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:36:01.736525 2124774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:36:01.749749 2124774 pause.go:52] kubelet running: false
	I1222 01:36:01.749816 2124774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:36:01.930377 2124774 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:36:01.930566 2124774 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:36:02.012488 2124774 cri.go:96] found id: "d61c20e397c0328b0fd6e7bd1535b2a978028501bf444674fce6879659e6ae3b"
	I1222 01:36:02.012523 2124774 cri.go:96] found id: "65b574438a73e2b2eefd38c4e7e07643ee4f34520f64175c82322af50cf08c67"
	I1222 01:36:02.012528 2124774 cri.go:96] found id: "43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c"
	I1222 01:36:02.012532 2124774 cri.go:96] found id: "6e7e75c0138f0dafd4e46397e222ce458e1253549363ec3b3c0b35b0eb5a3e22"
	I1222 01:36:02.012535 2124774 cri.go:96] found id: "61111522e0f72b777c735bc37c123c749993783fad4a5178055cf076026c6073"
	I1222 01:36:02.012538 2124774 cri.go:96] found id: "241d1760e627abc7a3d344ac05e9a57f9c7451cb29646bdefe78bfbc841f5c11"
	I1222 01:36:02.012541 2124774 cri.go:96] found id: "21f70a0897f7814aab2cb5d3f2434693fd5a4d6832c30cb70742dc4fa69e6e5d"
	I1222 01:36:02.012570 2124774 cri.go:96] found id: "1b831e8cc8f3d0a0cb574d1962397875204d05857734e2b9f7aae161ea35fe4e"
	I1222 01:36:02.012582 2124774 cri.go:96] found id: "bef725e3057a75230c765ab506197a6b277dd374b7da1670267592ddadf181e3"
	I1222 01:36:02.012596 2124774 cri.go:96] found id: "09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207"
	I1222 01:36:02.012603 2124774 cri.go:96] found id: "611a97bfc2a29f0e10c0196b1d3eba98b24c290267f58aab6f754d2c388f96e7"
	I1222 01:36:02.012606 2124774 cri.go:96] found id: ""
	I1222 01:36:02.012681 2124774 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:36:02.362629 2124774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:36:02.375779 2124774 pause.go:52] kubelet running: false
	I1222 01:36:02.375846 2124774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:36:02.568816 2124774 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:36:02.568935 2124774 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:36:02.670319 2124774 cri.go:96] found id: "d61c20e397c0328b0fd6e7bd1535b2a978028501bf444674fce6879659e6ae3b"
	I1222 01:36:02.670399 2124774 cri.go:96] found id: "65b574438a73e2b2eefd38c4e7e07643ee4f34520f64175c82322af50cf08c67"
	I1222 01:36:02.670411 2124774 cri.go:96] found id: "43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c"
	I1222 01:36:02.670418 2124774 cri.go:96] found id: "6e7e75c0138f0dafd4e46397e222ce458e1253549363ec3b3c0b35b0eb5a3e22"
	I1222 01:36:02.670422 2124774 cri.go:96] found id: "61111522e0f72b777c735bc37c123c749993783fad4a5178055cf076026c6073"
	I1222 01:36:02.670425 2124774 cri.go:96] found id: "241d1760e627abc7a3d344ac05e9a57f9c7451cb29646bdefe78bfbc841f5c11"
	I1222 01:36:02.670428 2124774 cri.go:96] found id: "21f70a0897f7814aab2cb5d3f2434693fd5a4d6832c30cb70742dc4fa69e6e5d"
	I1222 01:36:02.670431 2124774 cri.go:96] found id: "1b831e8cc8f3d0a0cb574d1962397875204d05857734e2b9f7aae161ea35fe4e"
	I1222 01:36:02.670434 2124774 cri.go:96] found id: "bef725e3057a75230c765ab506197a6b277dd374b7da1670267592ddadf181e3"
	I1222 01:36:02.670440 2124774 cri.go:96] found id: "09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207"
	I1222 01:36:02.670443 2124774 cri.go:96] found id: "611a97bfc2a29f0e10c0196b1d3eba98b24c290267f58aab6f754d2c388f96e7"
	I1222 01:36:02.670447 2124774 cri.go:96] found id: ""
	I1222 01:36:02.670540 2124774 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:36:02.686560 2124774 out.go:203] 
	W1222 01:36:02.689460 2124774 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:36:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:36:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 01:36:02.689524 2124774 out.go:285] * 
	* 
	W1222 01:36:02.707661 2124774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:36:02.710713 2124774 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-235567 --alsologtostderr -v=1 failed: exit status 80
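The trace above lays out minikube's pause sequence: check whether the kubelet is active, run `sudo systemctl disable --now kubelet`, list containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then call `sudo runc list -f json`. The exit status 80 comes from that last step: runc keeps its state under /run/runc by default, the directory is absent on this CRI-O node, and every retry fails the same way. Below is a minimal sketch of the same probe sequence run directly on a node; minikube drives these commands over SSH, so this is illustrative rather than minikube's actual code path.

// Sketch: reproduce the pause-time probes from the trace above on the
// node itself (requires systemctl, crictl and runc on PATH).
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Is the kubelet active? (the trace logs "kubelet running: true/false")
	if _, err := run("systemctl", "is-active", "--quiet", "kubelet"); err == nil {
		// 2. Stop and disable it so nothing restarts pods mid-pause.
		if out, err := run("sudo", "systemctl", "disable", "--now", "kubelet"); err != nil {
			fmt.Printf("disable kubelet: %v\n%s", err, out)
		}
	}

	// 3. Enumerate containers in the namespaces minikube pauses.
	ids, _ := run("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	fmt.Printf("kube-system container ids:\n%s", ids)

	// 4. The step that fails in this run: runc reads state from /run/runc,
	// and if that directory does not exist it exits non-zero with
	// "open /run/runc: no such file or directory".
	if out, err := run("sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	} else {
		fmt.Printf("runc containers: %s\n", out)
	}
}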
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-235567
helpers_test.go:244: (dbg) docker inspect embed-certs-235567:

-- stdout --
	[
	    {
	        "Id": "4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6",
	        "Created": "2025-12-22T01:33:47.830445272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2122242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:34:59.143603171Z",
	            "FinishedAt": "2025-12-22T01:34:58.358096418Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/hosts",
	        "LogPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6-json.log",
	        "Name": "/embed-certs-235567",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-235567:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-235567",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6",
	                "LowerDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-235567",
	                "Source": "/var/lib/docker/volumes/embed-certs-235567/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-235567",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-235567",
	                "name.minikube.sigs.k8s.io": "embed-certs-235567",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bb2bd55148d0f6c10b9090d8e42bdf1f6940199965647043cf16695bcf80046",
	            "SandboxKey": "/var/run/docker/netns/0bb2bd55148d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38761"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38762"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38765"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38764"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-235567": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:e6:85:80:63:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f3306e4c44399ff80f026f59f3fa2f6e6e98178df3c00a9bdf130d8ed5d4e77",
	                    "EndpointID": "80d7014065290cc2a301adcf0bb90b6a5972ba089b1a6861ca38dfdab1e105b6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-235567",
	                        "4081720ccc5a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
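For reference, the host-port mappings recorded in the inspect output above can be read back with the same Go-template syntax minikube itself invokes later in this log. A minimal sketch (container name and expected value taken from this run):

	# print the host port forwarded to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-235567
	# this run: 38761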
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567: exit status 2 (331.706555ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
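The non-zero exit is tolerated here ("may be ok"): the host can report Running on stdout while the status command still exits non-zero because other components are not all running, so the helper reads stdout rather than trusting the exit code. A sketch of the equivalent shell check (binary path and profile name from this run):

	host=$(out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-235567 -n embed-certs-235567)
	rc=$?                            # non-zero when not all components are Running
	echo "host=${host} exit=${rc}"   # this run: host=Running exit=2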
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-235567 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-235567 logs -n 25: (1.287414651s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-314223 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p force-systemd-flag-314223                                                                                                                                                                                                                  │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ start   │ -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ cert-options-433911 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ -p cert-options-433911 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ stop    │ -p old-k8s-version-535197 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054         │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                   │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:34:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:34:58.877432 2122110 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:34:58.877541 2122110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:34:58.877551 2122110 out.go:374] Setting ErrFile to fd 2...
	I1222 01:34:58.877556 2122110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:34:58.877991 2122110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:34:58.878443 2122110 out.go:368] Setting JSON to false
	I1222 01:34:58.879377 2122110 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116248,"bootTime":1766251051,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:34:58.879466 2122110 start.go:143] virtualization:  
	I1222 01:34:58.882418 2122110 out.go:179] * [embed-certs-235567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:34:58.884766 2122110 notify.go:221] Checking for updates...
	I1222 01:34:58.885259 2122110 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:34:58.888148 2122110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:34:58.891081 2122110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:34:58.893978 2122110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:34:58.896876 2122110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:34:58.899990 2122110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:34:58.903465 2122110 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:34:58.904062 2122110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:34:58.935161 2122110 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:34:58.935323 2122110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:34:58.994461 2122110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:34:58.98529114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:34:58.994567 2122110 docker.go:319] overlay module found
	I1222 01:34:58.997712 2122110 out.go:179] * Using the docker driver based on existing profile
	I1222 01:34:59.001556 2122110 start.go:309] selected driver: docker
	I1222 01:34:59.001592 2122110 start.go:928] validating driver "docker" against &{Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:34:59.001709 2122110 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:34:59.002608 2122110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:34:59.058353 2122110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:34:59.049155876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:34:59.058659 2122110 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:34:59.058687 2122110 cni.go:84] Creating CNI manager for ""
	I1222 01:34:59.058742 2122110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:34:59.058784 2122110 start.go:353] cluster config:
	{Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:34:59.062013 2122110 out.go:179] * Starting "embed-certs-235567" primary control-plane node in "embed-certs-235567" cluster
	I1222 01:34:59.064896 2122110 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:34:59.067876 2122110 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:34:59.070716 2122110 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:34:59.070760 2122110 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:34:59.070769 2122110 cache.go:65] Caching tarball of preloaded images
	I1222 01:34:59.070811 2122110 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:34:59.070849 2122110 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:34:59.070858 2122110 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:34:59.070975 2122110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json ...
	I1222 01:34:59.089986 2122110 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:34:59.090008 2122110 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:34:59.090027 2122110 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:34:59.090055 2122110 start.go:360] acquireMachinesLock for embed-certs-235567: {Name:mk48f2965e632f2a10179742c54c3729e95c3cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:34:59.090119 2122110 start.go:364] duration metric: took 38.103µs to acquireMachinesLock for "embed-certs-235567"
	I1222 01:34:59.090143 2122110 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:34:59.090152 2122110 fix.go:54] fixHost starting: 
	I1222 01:34:59.090415 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:59.106986 2122110 fix.go:112] recreateIfNeeded on embed-certs-235567: state=Stopped err=<nil>
	W1222 01:34:59.107026 2122110 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:34:59.110325 2122110 out.go:252] * Restarting existing docker container for "embed-certs-235567" ...
	I1222 01:34:59.110405 2122110 cli_runner.go:164] Run: docker start embed-certs-235567
	I1222 01:34:59.342708 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:59.364901 2122110 kic.go:430] container "embed-certs-235567" state is running.
	I1222 01:34:59.366477 2122110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:34:59.390010 2122110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json ...
	I1222 01:34:59.390389 2122110 machine.go:94] provisionDockerMachine start ...
	I1222 01:34:59.390469 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:34:59.414254 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:34:59.414584 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:34:59.414594 2122110 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:34:59.415518 2122110 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:35:02.549121 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-235567
	
	I1222 01:35:02.549147 2122110 ubuntu.go:182] provisioning hostname "embed-certs-235567"
	I1222 01:35:02.549226 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:02.567396 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:35:02.567709 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:35:02.567723 2122110 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-235567 && echo "embed-certs-235567" | sudo tee /etc/hostname
	I1222 01:35:02.724449 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-235567
	
	I1222 01:35:02.724574 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:02.743093 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:35:02.743400 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:35:02.743430 2122110 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235567/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:35:02.877962 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: 
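The empty SSH output above means the script took one of its silent branches (the hostname entry already existed, or the 127.0.1.1 line was rewritten in place by sed). An illustrative follow-up check from a shell on the node:

	grep 'embed-certs-235567' /etc/hosts
	# expected to include a line such as: 127.0.1.1 embed-certs-235567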
	I1222 01:35:02.877985 2122110 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:35:02.878008 2122110 ubuntu.go:190] setting up certificates
	I1222 01:35:02.878024 2122110 provision.go:84] configureAuth start
	I1222 01:35:02.878092 2122110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:35:02.897155 2122110 provision.go:143] copyHostCerts
	I1222 01:35:02.897225 2122110 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:35:02.897243 2122110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:35:02.897492 2122110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:35:02.897622 2122110 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:35:02.897636 2122110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:35:02.897671 2122110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:35:02.897746 2122110 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:35:02.897755 2122110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:35:02.897781 2122110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:35:02.897843 2122110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235567 san=[127.0.0.1 192.168.85.2 embed-certs-235567 localhost minikube]
	I1222 01:35:03.040349 2122110 provision.go:177] copyRemoteCerts
	I1222 01:35:03.040429 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:35:03.040471 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.057545 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.153411 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1222 01:35:03.171075 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:35:03.188657 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:35:03.206086 2122110 provision.go:87] duration metric: took 328.037791ms to configureAuth
	I1222 01:35:03.206121 2122110 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:35:03.206324 2122110 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:35:03.206430 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.224365 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:35:03.224680 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:35:03.224694 2122110 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:35:03.600797 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:35:03.600823 2122110 machine.go:97] duration metric: took 4.210414919s to provisionDockerMachine
	I1222 01:35:03.600835 2122110 start.go:293] postStartSetup for "embed-certs-235567" (driver="docker")
	I1222 01:35:03.600853 2122110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:35:03.600942 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:35:03.600983 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.618930 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.717261 2122110 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:35:03.720571 2122110 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:35:03.720598 2122110 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:35:03.720610 2122110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:35:03.720665 2122110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:35:03.720746 2122110 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:35:03.720845 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:35:03.728230 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:35:03.745249 2122110 start.go:296] duration metric: took 144.39224ms for postStartSetup
	I1222 01:35:03.745353 2122110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:35:03.745414 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.762587 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.854394 2122110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:35:03.858729 2122110 fix.go:56] duration metric: took 4.768570013s for fixHost
	I1222 01:35:03.858757 2122110 start.go:83] releasing machines lock for "embed-certs-235567", held for 4.768626076s
	I1222 01:35:03.858823 2122110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:35:03.875377 2122110 ssh_runner.go:195] Run: cat /version.json
	I1222 01:35:03.875442 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.875715 2122110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:35:03.875780 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.897926 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.897806 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.993420 2122110 ssh_runner.go:195] Run: systemctl --version
	I1222 01:35:04.095279 2122110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:35:04.147933 2122110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:35:04.153645 2122110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:35:04.153723 2122110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:35:04.163549 2122110 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:35:04.163579 2122110 start.go:496] detecting cgroup driver to use...
	I1222 01:35:04.163641 2122110 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:35:04.163730 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:35:04.180036 2122110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:35:04.193252 2122110 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:35:04.193378 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:35:04.209052 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:35:04.222359 2122110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:35:04.331684 2122110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:35:04.448814 2122110 docker.go:234] disabling docker service ...
	I1222 01:35:04.448916 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:35:04.463696 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:35:04.476528 2122110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:35:04.592283 2122110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:35:04.701484 2122110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:35:04.714505 2122110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:35:04.728852 2122110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:35:04.728969 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.738396 2122110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:35:04.738528 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.747110 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.755493 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.764049 2122110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:35:04.771587 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.779821 2122110 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.788101 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.796579 2122110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:35:04.803799 2122110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:35:04.811312 2122110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:35:04.950270 2122110 ssh_runner.go:195] Run: sudo systemctl restart crio
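Reconstructed from the sed commands above (not captured from the run), the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should now read as follows and can be spot-checked after the restart:

	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"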
	I1222 01:35:05.141253 2122110 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:35:05.141390 2122110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:35:05.145755 2122110 start.go:564] Will wait 60s for crictl version
	I1222 01:35:05.145875 2122110 ssh_runner.go:195] Run: which crictl
	I1222 01:35:05.150650 2122110 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:35:05.181935 2122110 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:35:05.182072 2122110 ssh_runner.go:195] Run: crio --version
	I1222 01:35:05.212416 2122110 ssh_runner.go:195] Run: crio --version
	I1222 01:35:05.244609 2122110 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:35:05.247581 2122110 cli_runner.go:164] Run: docker network inspect embed-certs-235567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:35:05.264046 2122110 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:35:05.268223 2122110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:35:05.277849 2122110 kubeadm.go:884] updating cluster {Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:35:05.277993 2122110 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:35:05.278055 2122110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:35:05.317228 2122110 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:35:05.317250 2122110 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:35:05.317311 2122110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:35:05.344042 2122110 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:35:05.344064 2122110 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:35:05.344072 2122110 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1222 01:35:05.344180 2122110 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-235567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
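The unit fragment above is materialized as a systemd drop-in a few lines below (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). From a shell on the node (e.g. via minikube ssh), the effective unit can be reviewed with (illustrative):

	systemctl cat kubelet                           # base unit plus drop-ins
	systemctl show kubelet -p ExecStart --no-pager  # the merged ExecStart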
	I1222 01:35:05.344272 2122110 ssh_runner.go:195] Run: crio config
	I1222 01:35:05.399295 2122110 cni.go:84] Creating CNI manager for ""
	I1222 01:35:05.399362 2122110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:35:05.399399 2122110 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:35:05.399442 2122110 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235567 NodeName:embed-certs-235567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:35:05.399644 2122110 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
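
This rendered config is copied below to /var/tmp/minikube/kubeadm.yaml.new (2215 bytes). A sketch of an offline sanity check, assuming the kubeadm "config validate" subcommand available in recent releases:

	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new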
	
	I1222 01:35:05.399747 2122110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:35:05.407595 2122110 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:35:05.407700 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:35:05.415377 2122110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1222 01:35:05.428302 2122110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:35:05.441419 2122110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1222 01:35:05.455267 2122110 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:35:05.458938 2122110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:35:05.468826 2122110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:35:05.587395 2122110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:35:05.607524 2122110 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567 for IP: 192.168.85.2
	I1222 01:35:05.607585 2122110 certs.go:195] generating shared ca certs ...
	I1222 01:35:05.607623 2122110 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:05.607807 2122110 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:35:05.607901 2122110 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:35:05.607935 2122110 certs.go:257] generating profile certs ...
	I1222 01:35:05.608059 2122110 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.key
	I1222 01:35:05.608192 2122110 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key.859ed20e
	I1222 01:35:05.608290 2122110 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key
	I1222 01:35:05.608465 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:35:05.608537 2122110 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:35:05.608564 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:35:05.608621 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:35:05.608673 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:35:05.608731 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:35:05.608819 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:35:05.609671 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:35:05.642650 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:35:05.669787 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:35:05.692920 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:35:05.711850 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1222 01:35:05.730906 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:35:05.752495 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:35:05.772928 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:35:05.803929 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:35:05.829710 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:35:05.848244 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:35:05.868983 2122110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:35:05.884828 2122110 ssh_runner.go:195] Run: openssl version
	I1222 01:35:05.891618 2122110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.899316 2122110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:35:05.907357 2122110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.911407 2122110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.911501 2122110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.953256 2122110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:35:05.961240 2122110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:35:05.969117 2122110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:35:05.977998 2122110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:35:05.982126 2122110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:35:05.982190 2122110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:35:06.023600 2122110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:35:06.031793 2122110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.040077 2122110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:35:06.048276 2122110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.052272 2122110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.052345 2122110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.094729 2122110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:35:06.102685 2122110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:35:06.107175 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:35:06.149064 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:35:06.190475 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:35:06.231759 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:35:06.276662 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:35:06.324325 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
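	The six openssl runs above are a freshness gate: -checkend 86400 exits non-zero if a certificate expires within the next 24 hours, which is what would trigger regeneration. The equivalent check in Go, as a sketch (the path is one of the certs from the log):

    // Sketch: the Go equivalent of `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Fail, like -checkend, if the cert expires within the next 24h.
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least 24h")
    }
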
	I1222 01:35:06.416704 2122110 kubeadm.go:401] StartCluster: {Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:35:06.416790 2122110 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:35:06.416857 2122110 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:35:06.444910 2122110 cri.go:96] found id: ""
	I1222 01:35:06.444982 2122110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:35:06.454000 2122110 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:35:06.454022 2122110 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:35:06.454089 2122110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:35:06.466514 2122110 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:35:06.466900 2122110 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-235567" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:35:06.467000 2122110 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-235567" cluster setting kubeconfig missing "embed-certs-235567" context setting]
	I1222 01:35:06.467277 2122110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:06.468949 2122110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:35:06.480891 2122110 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:35:06.480929 2122110 kubeadm.go:602] duration metric: took 26.895072ms to restartPrimaryControlPlane
	I1222 01:35:06.480939 2122110 kubeadm.go:403] duration metric: took 64.244425ms to StartCluster
	I1222 01:35:06.480955 2122110 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:06.481021 2122110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:35:06.481996 2122110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:06.482217 2122110 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:35:06.482568 2122110 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:35:06.482644 2122110 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:35:06.482784 2122110 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-235567"
	I1222 01:35:06.482804 2122110 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-235567"
	W1222 01:35:06.482810 2122110 addons.go:248] addon storage-provisioner should already be in state true
	I1222 01:35:06.482838 2122110 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:35:06.482882 2122110 addons.go:70] Setting dashboard=true in profile "embed-certs-235567"
	I1222 01:35:06.482935 2122110 addons.go:239] Setting addon dashboard=true in "embed-certs-235567"
	W1222 01:35:06.482961 2122110 addons.go:248] addon dashboard should already be in state true
	I1222 01:35:06.483012 2122110 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:35:06.483334 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.483582 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.484640 2122110 addons.go:70] Setting default-storageclass=true in profile "embed-certs-235567"
	I1222 01:35:06.484668 2122110 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235567"
	I1222 01:35:06.484954 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.494499 2122110 out.go:179] * Verifying Kubernetes components...
	I1222 01:35:06.501860 2122110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:35:06.538191 2122110 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:35:06.544989 2122110 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:35:06.548157 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:35:06.548181 2122110 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:35:06.548266 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:06.550087 2122110 addons.go:239] Setting addon default-storageclass=true in "embed-certs-235567"
	W1222 01:35:06.550109 2122110 addons.go:248] addon default-storageclass should already be in state true
	I1222 01:35:06.550132 2122110 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:35:06.550545 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.556961 2122110 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:35:06.563595 2122110 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:35:06.563625 2122110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:35:06.563692 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:06.587545 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:06.600160 2122110 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:35:06.600181 2122110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:35:06.600240 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:06.618031 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:06.643258 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:06.892847 2122110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:35:06.923934 2122110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:35:06.929220 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:35:06.929293 2122110 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:35:06.942315 2122110 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235567" to be "Ready" ...
	I1222 01:35:06.956434 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:35:06.956462 2122110 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:35:06.982713 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:35:06.982739 2122110 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:35:07.031344 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:35:07.031364 2122110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:35:07.046807 2122110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:35:07.131041 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:35:07.131107 2122110 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:35:07.199554 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:35:07.199618 2122110 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:35:07.258579 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:35:07.258644 2122110 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:35:07.281974 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:35:07.282039 2122110 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:35:07.308281 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:35:07.308350 2122110 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:35:07.335096 2122110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
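	Each addon manifest above is applied by running the version-pinned kubectl binary with KUBECONFIG pointed at the in-VM kubeconfig. A sketch of the same invocation shape with os/exec (paths copied from the log; this assumes it runs on the node itself, without the sudo/SSH wrapper):

    // Sketch: invoke a pinned kubectl with an explicit KUBECONFIG, the same
    // shape as the `sudo KUBECONFIG=... kubectl apply -f ...` lines above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.34.3/kubectl",
    		"apply", "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
    	)
    	// Pin the kubeconfig for this invocation only.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
    		os.Exit(1)
    	}
    }
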
	I1222 01:35:11.201573 2122110 node_ready.go:49] node "embed-certs-235567" is "Ready"
	I1222 01:35:11.201607 2122110 node_ready.go:38] duration metric: took 4.259190645s for node "embed-certs-235567" to be "Ready" ...
	I1222 01:35:11.201622 2122110 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:35:11.201686 2122110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:35:12.068685 2122110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.144673816s)
	I1222 01:35:12.068799 2122110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.021972614s)
	I1222 01:35:12.439964 2122110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.104785011s)
	I1222 01:35:12.440139 2122110 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.238436959s)
	I1222 01:35:12.440176 2122110 api_server.go:72] duration metric: took 5.957927974s to wait for apiserver process to appear ...
	I1222 01:35:12.440189 2122110 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:35:12.440206 2122110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:35:12.443208 2122110 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-235567 addons enable metrics-server
	
	I1222 01:35:12.446535 2122110 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1222 01:35:12.449546 2122110 addons.go:530] duration metric: took 5.966882312s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1222 01:35:12.463133 2122110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:35:12.463163 2122110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1222 01:35:12.940338 2122110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:35:12.950524 2122110 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 01:35:12.951606 2122110 api_server.go:141] control plane version: v1.34.3
	I1222 01:35:12.951630 2122110 api_server.go:131] duration metric: took 511.433692ms to wait for apiserver health ...
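	The healthz exchange above is a plain poll loop: the first probe returns 500 while the rbac/bootstrap-roles post-start hook is still running, and a retry roughly half a second later returns 200. A minimal sketch of such a probe, assuming certificate verification is skipped the way a bootstrap-time check would be before the cluster CA is trusted:

    // Sketch: poll the apiserver /healthz endpoint until it returns 200,
    // as the log above does. InsecureSkipVerify mirrors a bootstrap-time
    // probe made before the CA is trusted; do not use it in production.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 60; i++ {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for /healthz")
    }
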
	I1222 01:35:12.951640 2122110 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:35:12.959263 2122110 system_pods.go:59] 8 kube-system pods found
	I1222 01:35:12.959305 2122110 system_pods.go:61] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:35:12.959314 2122110 system_pods.go:61] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:35:12.959319 2122110 system_pods.go:61] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:35:12.959326 2122110 system_pods.go:61] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:35:12.959336 2122110 system_pods.go:61] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:35:12.959351 2122110 system_pods.go:61] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1222 01:35:12.959359 2122110 system_pods.go:61] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:35:12.959368 2122110 system_pods.go:61] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:35:12.959376 2122110 system_pods.go:74] duration metric: took 7.72999ms to wait for pod list to return data ...
	I1222 01:35:12.959412 2122110 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:35:12.963844 2122110 default_sa.go:45] found service account: "default"
	I1222 01:35:12.963870 2122110 default_sa.go:55] duration metric: took 4.451312ms for default service account to be created ...
	I1222 01:35:12.963880 2122110 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:35:12.987714 2122110 system_pods.go:86] 8 kube-system pods found
	I1222 01:35:12.987750 2122110 system_pods.go:89] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:35:12.987760 2122110 system_pods.go:89] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:35:12.987765 2122110 system_pods.go:89] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:35:12.987772 2122110 system_pods.go:89] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:35:12.987779 2122110 system_pods.go:89] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:35:12.987791 2122110 system_pods.go:89] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1222 01:35:12.987797 2122110 system_pods.go:89] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:35:12.987813 2122110 system_pods.go:89] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Running
	I1222 01:35:12.987821 2122110 system_pods.go:126] duration metric: took 23.935703ms to wait for k8s-apps to be running ...
	I1222 01:35:12.987829 2122110 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:35:12.987882 2122110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:35:13.004206 2122110 system_svc.go:56] duration metric: took 16.366545ms WaitForService to wait for kubelet
	I1222 01:35:13.004239 2122110 kubeadm.go:587] duration metric: took 6.52199004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:35:13.004257 2122110 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:35:13.031362 2122110 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:35:13.031399 2122110 node_conditions.go:123] node cpu capacity is 2
	I1222 01:35:13.031413 2122110 node_conditions.go:105] duration metric: took 27.140784ms to run NodePressure ...
	I1222 01:35:13.031428 2122110 start.go:242] waiting for startup goroutines ...
	I1222 01:35:13.031435 2122110 start.go:247] waiting for cluster config update ...
	I1222 01:35:13.031447 2122110 start.go:256] writing updated cluster config ...
	I1222 01:35:13.031715 2122110 ssh_runner.go:195] Run: rm -f paused
	I1222 01:35:13.035475 2122110 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:35:13.058769 2122110 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q5hbz" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:35:15.065103 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:17.563941 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:20.065806 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:22.564566 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:25.064761 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:27.563809 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:29.564262 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:31.564914 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:34.065251 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:36.564641 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:39.064728 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:41.564008 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:44.064896 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:46.563679 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	I1222 01:35:47.564191 2122110 pod_ready.go:94] pod "coredns-66bc5c9577-q5hbz" is "Ready"
	I1222 01:35:47.564216 2122110 pod_ready.go:86] duration metric: took 34.505417794s for pod "coredns-66bc5c9577-q5hbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.566818 2122110 pod_ready.go:83] waiting for pod "etcd-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.571078 2122110 pod_ready.go:94] pod "etcd-embed-certs-235567" is "Ready"
	I1222 01:35:47.571107 2122110 pod_ready.go:86] duration metric: took 4.263674ms for pod "etcd-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.572974 2122110 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.576900 2122110 pod_ready.go:94] pod "kube-apiserver-embed-certs-235567" is "Ready"
	I1222 01:35:47.576926 2122110 pod_ready.go:86] duration metric: took 3.890935ms for pod "kube-apiserver-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.579001 2122110 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.761933 2122110 pod_ready.go:94] pod "kube-controller-manager-embed-certs-235567" is "Ready"
	I1222 01:35:47.761965 2122110 pod_ready.go:86] duration metric: took 182.939046ms for pod "kube-controller-manager-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.963187 2122110 pod_ready.go:83] waiting for pod "kube-proxy-shsn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.363058 2122110 pod_ready.go:94] pod "kube-proxy-shsn4" is "Ready"
	I1222 01:35:48.363134 2122110 pod_ready.go:86] duration metric: took 399.919738ms for pod "kube-proxy-shsn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.561952 2122110 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.962307 2122110 pod_ready.go:94] pod "kube-scheduler-embed-certs-235567" is "Ready"
	I1222 01:35:48.962336 2122110 pod_ready.go:86] duration metric: took 400.360274ms for pod "kube-scheduler-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.962350 2122110 pod_ready.go:40] duration metric: took 35.926841966s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
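	The pod_ready waits above poll each pod's Ready condition until it reports True or the pod disappears. A client-go sketch of a single iteration of that check (the kubeconfig path and pod name are taken from this run and would differ elsewhere):

    // Sketch: read one pod's Ready condition, as the pod_ready waits above
    // do on each poll. Requires k8s.io/client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	pod, err := clientset.CoreV1().Pods("kube-system").
    		Get(context.Background(), "coredns-66bc5c9577-q5hbz", metav1.GetOptions{})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Println("Ready condition:", c.Status)
    			return
    		}
    	}
    	fmt.Println("no Ready condition reported")
    }
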
	I1222 01:35:49.020509 2122110 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:35:49.023602 2122110 out.go:179] * Done! kubectl is now configured to use "embed-certs-235567" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.626639345Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.633237406Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.633273441Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.633295816Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.636705272Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.636740873Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.636765037Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.639998498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.640035855Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.640061429Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.643293906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.64332812Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.848850848Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8c710d9c-8163-456f-b3e9-c4a0fc7899b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.850435505Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ecf8282f-3553-46b4-8bd6-7af5fdca2709 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.851740213Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper" id=9d8eb6c3-2edb-4db6-988d-50c5bb2677e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.851831049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.85845804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.859109311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.891601154Z" level=info msg="Created container 09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper" id=9d8eb6c3-2edb-4db6-988d-50c5bb2677e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.89667364Z" level=info msg="Starting container: 09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207" id=1bf632f6-cf6d-4522-975a-0bc06544d6b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.899356973Z" level=info msg="Started container" PID=1703 containerID=09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper id=1bf632f6-cf6d-4522-975a-0bc06544d6b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35c093b75969285df0de874b77dec9dd8dfaaa7d7e4b6b9edc565458b6f1817b
	Dec 22 01:35:58 embed-certs-235567 conmon[1701]: conmon 09e4ee6f88de2e1d5aec <ninfo>: container 1703 exited with status 1
	Dec 22 01:35:59 embed-certs-235567 crio[649]: time="2025-12-22T01:35:59.069193522Z" level=info msg="Removing container: 724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8" id=2f8984c6-3796-4958-a581-4107f4dbf506 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:35:59 embed-certs-235567 crio[649]: time="2025-12-22T01:35:59.08085612Z" level=info msg="Error loading conmon cgroup of container 724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8: cgroup deleted" id=2f8984c6-3796-4958-a581-4107f4dbf506 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:35:59 embed-certs-235567 crio[649]: time="2025-12-22T01:35:59.086944986Z" level=info msg="Removed container 724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper" id=2f8984c6-3796-4958-a581-4107f4dbf506 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	09e4ee6f88de2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   35c093b759692       dashboard-metrics-scraper-6ffb444bf9-vx662   kubernetes-dashboard
	d61c20e397c03       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   6d879ff229568       storage-provisioner                          kube-system
	611a97bfc2a29       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   b2493cfbf7c23       kubernetes-dashboard-855c9754f9-5424p        kubernetes-dashboard
	65b574438a73e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   a8dcb3a4702f2       coredns-66bc5c9577-q5hbz                     kube-system
	43dcbf79e9d42       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   6d879ff229568       storage-provisioner                          kube-system
	6e7e75c0138f0       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   d69429cea04c1       kindnet-mkqcr                                kube-system
	85ab7fa5b1f3c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   68a401c072f7b       busybox                                      default
	61111522e0f72       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                           51 seconds ago      Running             kube-proxy                  1                   6c753d50963ac       kube-proxy-shsn4                             kube-system
	241d1760e627a       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                           57 seconds ago      Running             kube-controller-manager     1                   abbf9f970d312       kube-controller-manager-embed-certs-235567   kube-system
	21f70a0897f78       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           57 seconds ago      Running             etcd                        1                   6e5820b99a046       etcd-embed-certs-235567                      kube-system
	1b831e8cc8f3d       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                           57 seconds ago      Running             kube-apiserver              1                   32f555e4b9d75       kube-apiserver-embed-certs-235567            kube-system
	bef725e3057a7       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                           57 seconds ago      Running             kube-scheduler              1                   709ba04fcc382       kube-scheduler-embed-certs-235567            kube-system
	
	
	==> coredns [65b574438a73e2b2eefd38c4e7e07643ee4f34520f64175c82322af50cf08c67] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32852 - 44244 "HINFO IN 7198693271870947441.562273083697011167. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004445338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-235567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-235567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=embed-certs-235567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_34_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-235567
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:35:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-235567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                d4644b9d-9d43-4a69-8d1b-d5cf0da8245d
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-q5hbz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-embed-certs-235567                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-mkqcr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-235567             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-235567    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-shsn4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-235567             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vx662    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5424p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 104s               kube-proxy       
	  Normal   Starting                 51s                kube-proxy       
	  Normal   Starting                 111s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 111s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     110s               kubelet          Node embed-certs-235567 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-235567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  110s               kubelet          Node embed-certs-235567 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           106s               node-controller  Node embed-certs-235567 event: Registered Node embed-certs-235567 in Controller
	  Normal   NodeReady                92s                kubelet          Node embed-certs-235567 status is now: NodeReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-235567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-235567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-235567 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node embed-certs-235567 event: Registered Node embed-certs-235567 in Controller
	
	
	==> dmesg <==
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [21f70a0897f7814aab2cb5d3f2434693fd5a4d6832c30cb70742dc4fa69e6e5d] <==
	{"level":"warn","ts":"2025-12-22T01:35:09.681123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.698183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.718388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.731437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.751322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.767283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.795903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.827836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.846559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.893196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.908166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.943131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.950370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.969730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.993441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.017510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.027943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.045756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.070120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.083233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.122299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.158421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.198056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.222166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.287132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55968","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:36:03 up 1 day,  8:18,  0 user,  load average: 2.01, 2.73, 2.41
	Linux embed-certs-235567 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e7e75c0138f0dafd4e46397e222ce458e1253549363ec3b3c0b35b0eb5a3e22] <==
	I1222 01:35:12.467896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:35:12.477545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:35:12.477782       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:35:12.477865       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:35:12.477921       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:35:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:35:12.625122       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:35:12.625147       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:35:12.625156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:35:12.625886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1222 01:35:42.625473       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1222 01:35:42.625659       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1222 01:35:42.625745       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1222 01:35:42.625865       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1222 01:35:44.025608       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:35:44.025724       1 metrics.go:72] Registering metrics
	I1222 01:35:44.025819       1 controller.go:711] "Syncing nftables rules"
	I1222 01:35:52.625630       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:35:52.625685       1 main.go:301] handling current node
	I1222 01:36:02.625695       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:36:02.625728       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1b831e8cc8f3d0a0cb574d1962397875204d05857734e2b9f7aae161ea35fe4e] <==
	I1222 01:35:11.256232       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1222 01:35:11.256633       1 aggregator.go:171] initial CRD sync complete...
	I1222 01:35:11.256682       1 autoregister_controller.go:144] Starting autoregister controller
	I1222 01:35:11.256709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:35:11.256737       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:35:11.259996       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1222 01:35:11.294633       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1222 01:35:11.299070       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1222 01:35:11.302740       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:35:11.312313       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1222 01:35:11.312344       1 policy_source.go:240] refreshing policies
	I1222 01:35:11.313542       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1222 01:35:11.313601       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1222 01:35:11.316687       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:35:11.801308       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:35:11.805760       1 controller.go:667] quota admission added evaluator for: namespaces
	I1222 01:35:11.867798       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:35:11.951359       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1222 01:35:12.021039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:35:12.060691       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:35:12.327366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.143.12"}
	I1222 01:35:12.416318       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.68.183"}
	I1222 01:35:14.840164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1222 01:35:15.042933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:35:15.090054       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [241d1760e627abc7a3d344ac05e9a57f9c7451cb29646bdefe78bfbc841f5c11] <==
	I1222 01:35:14.536271       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 01:35:14.538587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:35:14.538701       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1222 01:35:14.538856       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:35:14.542060       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1222 01:35:14.542258       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1222 01:35:14.542423       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1222 01:35:14.542486       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1222 01:35:14.542516       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1222 01:35:14.542544       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1222 01:35:14.545189       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1222 01:35:14.547747       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1222 01:35:14.553364       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:35:14.555585       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:35:14.559871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:35:14.561556       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 01:35:14.564818       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1222 01:35:14.567043       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:35:14.569101       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:35:14.571748       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:35:14.583678       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:35:14.583790       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 01:35:14.583683       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1222 01:35:14.583724       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1222 01:35:14.583704       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [61111522e0f72b777c735bc37c123c749993783fad4a5178055cf076026c6073] <==
	I1222 01:35:12.462740       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:35:12.539512       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:35:12.639845       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:35:12.639943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1222 01:35:12.640065       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:35:12.736174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:35:12.736220       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:35:12.739949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:35:12.740253       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:35:12.740279       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:35:12.743570       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:35:12.743649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:35:12.743960       1 config.go:200] "Starting service config controller"
	I1222 01:35:12.744004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:35:12.746035       1 config.go:309] "Starting node config controller"
	I1222 01:35:12.746053       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:35:12.746060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:35:12.746356       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:35:12.746374       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:35:12.844705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:35:12.844771       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1222 01:35:12.846702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bef725e3057a75230c765ab506197a6b277dd374b7da1670267592ddadf181e3] <==
	I1222 01:35:09.735114       1 serving.go:386] Generated self-signed cert in-memory
	W1222 01:35:11.137608       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1222 01:35:11.137646       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:35:11.137657       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1222 01:35:11.137664       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1222 01:35:11.240680       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1222 01:35:11.240708       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:35:11.242895       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:35:11.242923       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:35:11.244932       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1222 01:35:11.245002       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:35:11.343623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 22 01:35:15 embed-certs-235567 kubelet[776]: W1222 01:35:15.549935     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/crio-b2493cfbf7c23b9af773669df392217b8f3bbc797198470a78dda26f5526a6b8 WatchSource:0}: Error finding container b2493cfbf7c23b9af773669df392217b8f3bbc797198470a78dda26f5526a6b8: Status 404 returned error can't find the container with id b2493cfbf7c23b9af773669df392217b8f3bbc797198470a78dda26f5526a6b8
	Dec 22 01:35:17 embed-certs-235567 kubelet[776]: I1222 01:35:17.107459     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 22 01:35:19 embed-certs-235567 kubelet[776]: I1222 01:35:19.950154     776 scope.go:117] "RemoveContainer" containerID="00f4c49cc4c5ce843371f4d342bd1514aa548a106dd97e9207a61e6d72c02e8e"
	Dec 22 01:35:20 embed-certs-235567 kubelet[776]: I1222 01:35:20.955532     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:20 embed-certs-235567 kubelet[776]: E1222 01:35:20.955686     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:20 embed-certs-235567 kubelet[776]: I1222 01:35:20.957900     776 scope.go:117] "RemoveContainer" containerID="00f4c49cc4c5ce843371f4d342bd1514aa548a106dd97e9207a61e6d72c02e8e"
	Dec 22 01:35:21 embed-certs-235567 kubelet[776]: I1222 01:35:21.959815     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:21 embed-certs-235567 kubelet[776]: E1222 01:35:21.959945     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:24 embed-certs-235567 kubelet[776]: I1222 01:35:24.699904     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:24 embed-certs-235567 kubelet[776]: E1222 01:35:24.700093     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:25 embed-certs-235567 kubelet[776]: I1222 01:35:25.558402     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5424p" podStartSLOduration=2.188318763 podStartE2EDuration="10.558385584s" podCreationTimestamp="2025-12-22 01:35:15 +0000 UTC" firstStartedPulling="2025-12-22 01:35:15.553070713 +0000 UTC m=+9.947349182" lastFinishedPulling="2025-12-22 01:35:23.923137534 +0000 UTC m=+18.317416003" observedRunningTime="2025-12-22 01:35:24.978631265 +0000 UTC m=+19.372909766" watchObservedRunningTime="2025-12-22 01:35:25.558385584 +0000 UTC m=+19.952664053"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: I1222 01:35:35.848461     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: I1222 01:35:35.993175     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: I1222 01:35:35.994170     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: E1222 01:35:35.996244     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:43 embed-certs-235567 kubelet[776]: I1222 01:35:43.025381     776 scope.go:117] "RemoveContainer" containerID="43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c"
	Dec 22 01:35:44 embed-certs-235567 kubelet[776]: I1222 01:35:44.700436     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:44 embed-certs-235567 kubelet[776]: E1222 01:35:44.701077     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:58 embed-certs-235567 kubelet[776]: I1222 01:35:58.848251     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:59 embed-certs-235567 kubelet[776]: I1222 01:35:59.067367     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:59 embed-certs-235567 kubelet[776]: I1222 01:35:59.067587     776 scope.go:117] "RemoveContainer" containerID="09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207"
	Dec 22 01:35:59 embed-certs-235567 kubelet[776]: E1222 01:35:59.068048     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:36:01 embed-certs-235567 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:36:01 embed-certs-235567 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:36:01 embed-certs-235567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [611a97bfc2a29f0e10c0196b1d3eba98b24c290267f58aab6f754d2c388f96e7] <==
	2025/12/22 01:35:23 Using namespace: kubernetes-dashboard
	2025/12/22 01:35:23 Using in-cluster config to connect to apiserver
	2025/12/22 01:35:23 Using secret token for csrf signing
	2025/12/22 01:35:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/22 01:35:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/22 01:35:24 Successful initial request to the apiserver, version: v1.34.3
	2025/12/22 01:35:24 Generating JWE encryption key
	2025/12/22 01:35:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/22 01:35:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/22 01:35:24 Initializing JWE encryption key from synchronized object
	2025/12/22 01:35:24 Creating in-cluster Sidecar client
	2025/12/22 01:35:24 Serving insecurely on HTTP port: 9090
	2025/12/22 01:35:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:35:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:35:23 Starting overwatch
	
	
	==> storage-provisioner [43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c] <==
	I1222 01:35:12.370964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1222 01:35:42.372662       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d61c20e397c0328b0fd6e7bd1535b2a978028501bf444674fce6879659e6ae3b] <==
	I1222 01:35:43.079844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:35:43.094693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:35:43.095245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1222 01:35:43.099421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:46.554942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:50.814629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:54.412819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:57.466325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:00.491820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:00.505301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:36:00.505888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:36:00.506276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-235567_f519d8a3-c794-41b5-98b7-77d342de02ae!
	I1222 01:36:00.506490       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d572cc3c-b104-421c-a93c-f4c2c45f34e5", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-235567_f519d8a3-c794-41b5-98b7-77d342de02ae became leader
	W1222 01:36:00.516435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:00.521314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:36:00.609909       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-235567_f519d8a3-c794-41b5-98b7-77d342de02ae!
	W1222 01:36:02.531501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:02.536774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
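The kubelet log above ends with systemd stopping kubelet.service while the status checks below still report the API server as Running, which is consistent with a pause that did not fully take hold. As a hedged illustration only, the container states behind that "Running" can be listed over SSH with crictl; a minimal sketch reusing this run's profile name (crictl availability inside the kicbase node is assumed, not shown in this report):

	# List every CRI-O container and its state on the minikube node;
	# control-plane containers that were never frozen still show Running.
	out/minikube-linux-arm64 ssh -p embed-certs-235567 -- sudo crictl ps -a
	# Narrow to the kube-apiserver container named in the log section headers above.
	out/minikube-linux-arm64 ssh -p embed-certs-235567 -- sudo crictl ps -a --name kube-apiserver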
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-235567 -n embed-certs-235567
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-235567 -n embed-certs-235567: exit status 2 (358.536251ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-235567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
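The status probes in these post-mortems use Go templates over minikube's status struct; `{{.APIServer}}` and `{{.Host}}` are the two fields the surrounding checks exercise. A minimal sketch combining them into one hand-run check (only those two field names are taken from this report; any others would be assumptions):

	# Exit status 2 alongside "Running" text is tolerated by the harness ("may be ok").
	out/minikube-linux-arm64 status -p embed-certs-235567 \
	  --format='host:{{.Host}} apiserver:{{.APIServer}}'
	echo "status exit=$?"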
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-235567
helpers_test.go:244: (dbg) docker inspect embed-certs-235567:

-- stdout --
	[
	    {
	        "Id": "4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6",
	        "Created": "2025-12-22T01:33:47.830445272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2122242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:34:59.143603171Z",
	            "FinishedAt": "2025-12-22T01:34:58.358096418Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/hosts",
	        "LogPath": "/var/lib/docker/containers/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6-json.log",
	        "Name": "/embed-certs-235567",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-235567:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-235567",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6",
	                "LowerDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21e819f716cb71a9fbb88a00aeded38d8e5ff3ee7efce94c33b902a682b6aab8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-235567",
	                "Source": "/var/lib/docker/volumes/embed-certs-235567/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-235567",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-235567",
	                "name.minikube.sigs.k8s.io": "embed-certs-235567",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bb2bd55148d0f6c10b9090d8e42bdf1f6940199965647043cf16695bcf80046",
	            "SandboxKey": "/var/run/docker/netns/0bb2bd55148d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38761"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38762"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38765"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38764"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-235567": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:e6:85:80:63:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f3306e4c44399ff80f026f59f3fa2f6e6e98178df3c00a9bdf130d8ed5d4e77",
	                    "EndpointID": "80d7014065290cc2a301adcf0bb90b6a5972ba089b1a6861ca38dfdab1e105b6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-235567",
	                        "4081720ccc5a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
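In the inspect output above, `HostConfig.PortBindings` carries empty `HostPort` values while `NetworkSettings.Ports` holds the ports Docker actually bound (38761 through 38765). A hedged sketch for recovering the live API server port with standard docker templating (neither command appears in this report):

	# Pull the host port bound for the apiserver's 8443/tcp endpoint.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-235567
	# Same answer via the port subcommand.
	docker port embed-certs-235567 8443/tcp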
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567: exit status 2 (350.387787ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
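The earlier post-mortem (helpers_test.go:270 above) followed the same tolerated exit with a sweep for pods stuck outside the Running phase; the identical check can be repeated by hand against this profile's context. A sketch copied from that step:

	# Names of all pods, in every namespace, whose phase is not Running.
	kubectl --context embed-certs-235567 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'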
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-235567 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-235567 logs -n 25: (1.265235137s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-314223 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p force-systemd-flag-314223                                                                                                                                                                                                                  │ force-systemd-flag-314223 │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ start   │ -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ cert-options-433911 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ ssh     │ -p cert-options-433911 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:30 UTC │
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911       │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ stop    │ -p old-k8s-version-535197 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738    │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054         │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197    │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                   │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-235567        │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:34:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:34:58.877432 2122110 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:34:58.877541 2122110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:34:58.877551 2122110 out.go:374] Setting ErrFile to fd 2...
	I1222 01:34:58.877556 2122110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:34:58.877991 2122110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:34:58.878443 2122110 out.go:368] Setting JSON to false
	I1222 01:34:58.879377 2122110 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116248,"bootTime":1766251051,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:34:58.879466 2122110 start.go:143] virtualization:  
	I1222 01:34:58.882418 2122110 out.go:179] * [embed-certs-235567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:34:58.884766 2122110 notify.go:221] Checking for updates...
	I1222 01:34:58.885259 2122110 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:34:58.888148 2122110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:34:58.891081 2122110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:34:58.893978 2122110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:34:58.896876 2122110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:34:58.899990 2122110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:34:58.903465 2122110 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:34:58.904062 2122110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:34:58.935161 2122110 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:34:58.935323 2122110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:34:58.994461 2122110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:34:58.98529114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:34:58.994567 2122110 docker.go:319] overlay module found
	I1222 01:34:58.997712 2122110 out.go:179] * Using the docker driver based on existing profile
	I1222 01:34:59.001556 2122110 start.go:309] selected driver: docker
	I1222 01:34:59.001592 2122110 start.go:928] validating driver "docker" against &{Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:34:59.001709 2122110 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:34:59.002608 2122110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:34:59.058353 2122110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:34:59.049155876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:34:59.058659 2122110 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:34:59.058687 2122110 cni.go:84] Creating CNI manager for ""
	I1222 01:34:59.058742 2122110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:34:59.058784 2122110 start.go:353] cluster config:
	{Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:34:59.062013 2122110 out.go:179] * Starting "embed-certs-235567" primary control-plane node in "embed-certs-235567" cluster
	I1222 01:34:59.064896 2122110 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:34:59.067876 2122110 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:34:59.070716 2122110 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:34:59.070760 2122110 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:34:59.070769 2122110 cache.go:65] Caching tarball of preloaded images
	I1222 01:34:59.070811 2122110 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:34:59.070849 2122110 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:34:59.070858 2122110 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:34:59.070975 2122110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json ...
	I1222 01:34:59.089986 2122110 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:34:59.090008 2122110 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:34:59.090027 2122110 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:34:59.090055 2122110 start.go:360] acquireMachinesLock for embed-certs-235567: {Name:mk48f2965e632f2a10179742c54c3729e95c3cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:34:59.090119 2122110 start.go:364] duration metric: took 38.103µs to acquireMachinesLock for "embed-certs-235567"
	I1222 01:34:59.090143 2122110 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:34:59.090152 2122110 fix.go:54] fixHost starting: 
	I1222 01:34:59.090415 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:59.106986 2122110 fix.go:112] recreateIfNeeded on embed-certs-235567: state=Stopped err=<nil>
	W1222 01:34:59.107026 2122110 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:34:59.110325 2122110 out.go:252] * Restarting existing docker container for "embed-certs-235567" ...
	I1222 01:34:59.110405 2122110 cli_runner.go:164] Run: docker start embed-certs-235567
	I1222 01:34:59.342708 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:34:59.364901 2122110 kic.go:430] container "embed-certs-235567" state is running.
	I1222 01:34:59.366477 2122110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:34:59.390010 2122110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/config.json ...
	I1222 01:34:59.390389 2122110 machine.go:94] provisionDockerMachine start ...
	I1222 01:34:59.390469 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:34:59.414254 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:34:59.414584 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:34:59.414594 2122110 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:34:59.415518 2122110 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:35:02.549121 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-235567
	
	I1222 01:35:02.549147 2122110 ubuntu.go:182] provisioning hostname "embed-certs-235567"
	I1222 01:35:02.549226 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:02.567396 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:35:02.567709 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:35:02.567723 2122110 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-235567 && echo "embed-certs-235567" | sudo tee /etc/hostname
	I1222 01:35:02.724449 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-235567
	
	I1222 01:35:02.724574 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:02.743093 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:35:02.743400 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:35:02.743430 2122110 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235567/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:35:02.877962 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: 
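	(The two SSH commands above are minikube's hostname provisioning: set the kernel hostname, persist it to /etc/hostname, then pin the name to 127.0.1.1 in /etc/hosts. A slightly simplified, commented sketch of the same steps, with the hostname taken from this log; run inside the guest:)
	
	  # set and persist the hostname (value from the log above)
	  NAME=embed-certs-235567
	  sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	  # pin the name to 127.0.1.1 so it resolves without DNS:
	  # rewrite an existing 127.0.1.1 entry, otherwise append one
	  if ! grep -q "\s$NAME" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	      sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	    else
	      echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	    fi
	  fi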
	I1222 01:35:02.877985 2122110 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:35:02.878008 2122110 ubuntu.go:190] setting up certificates
	I1222 01:35:02.878024 2122110 provision.go:84] configureAuth start
	I1222 01:35:02.878092 2122110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:35:02.897155 2122110 provision.go:143] copyHostCerts
	I1222 01:35:02.897225 2122110 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:35:02.897243 2122110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:35:02.897492 2122110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:35:02.897622 2122110 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:35:02.897636 2122110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:35:02.897671 2122110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:35:02.897746 2122110 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:35:02.897755 2122110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:35:02.897781 2122110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:35:02.897843 2122110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235567 san=[127.0.0.1 192.168.85.2 embed-certs-235567 localhost minikube]
	I1222 01:35:03.040349 2122110 provision.go:177] copyRemoteCerts
	I1222 01:35:03.040429 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:35:03.040471 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.057545 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.153411 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1222 01:35:03.171075 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:35:03.188657 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:35:03.206086 2122110 provision.go:87] duration metric: took 328.037791ms to configureAuth
	I1222 01:35:03.206121 2122110 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:35:03.206324 2122110 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:35:03.206430 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.224365 2122110 main.go:144] libmachine: Using SSH client type: native
	I1222 01:35:03.224680 2122110 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38761 <nil> <nil>}
	I1222 01:35:03.224694 2122110 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:35:03.600797 2122110 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:35:03.600823 2122110 machine.go:97] duration metric: took 4.210414919s to provisionDockerMachine
	I1222 01:35:03.600835 2122110 start.go:293] postStartSetup for "embed-certs-235567" (driver="docker")
	I1222 01:35:03.600853 2122110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:35:03.600942 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:35:03.600983 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.618930 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.717261 2122110 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:35:03.720571 2122110 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:35:03.720598 2122110 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:35:03.720610 2122110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:35:03.720665 2122110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:35:03.720746 2122110 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:35:03.720845 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:35:03.728230 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:35:03.745249 2122110 start.go:296] duration metric: took 144.39224ms for postStartSetup
	I1222 01:35:03.745353 2122110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:35:03.745414 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.762587 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.854394 2122110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:35:03.858729 2122110 fix.go:56] duration metric: took 4.768570013s for fixHost
	I1222 01:35:03.858757 2122110 start.go:83] releasing machines lock for "embed-certs-235567", held for 4.768626076s
	I1222 01:35:03.858823 2122110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-235567
	I1222 01:35:03.875377 2122110 ssh_runner.go:195] Run: cat /version.json
	I1222 01:35:03.875442 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.875715 2122110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:35:03.875780 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:03.897926 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.897806 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:03.993420 2122110 ssh_runner.go:195] Run: systemctl --version
	I1222 01:35:04.095279 2122110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:35:04.147933 2122110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:35:04.153645 2122110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:35:04.153723 2122110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:35:04.163549 2122110 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
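	(The find command as logged has its shell escaping stripped; a runnable form of what appears to be the same invocation, which renames any bridge/podman CNI configs out of the way:)
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;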
	I1222 01:35:04.163579 2122110 start.go:496] detecting cgroup driver to use...
	I1222 01:35:04.163641 2122110 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:35:04.163730 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:35:04.180036 2122110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:35:04.193252 2122110 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:35:04.193378 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:35:04.209052 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:35:04.222359 2122110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:35:04.331684 2122110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:35:04.448814 2122110 docker.go:234] disabling docker service ...
	I1222 01:35:04.448916 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:35:04.463696 2122110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:35:04.476528 2122110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:35:04.592283 2122110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:35:04.701484 2122110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:35:04.714505 2122110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:35:04.728852 2122110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:35:04.728969 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.738396 2122110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:35:04.738528 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.747110 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.755493 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.764049 2122110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:35:04.771587 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.779821 2122110 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.788101 2122110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:35:04.796579 2122110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:35:04.803799 2122110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:35:04.811312 2122110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:35:04.950270 2122110 ssh_runner.go:195] Run: sudo systemctl restart crio
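	(The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A consolidated sketch of the edits shown in this log, with every path and value copied from it:)
	
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  # use the pause image kubeadm expects
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  # match the host's cgroupfs driver; run conmon in the pod cgroup
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	  # let pods bind privileged ports via default_sysctls
	  sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	  sudo grep -q '^ *default_sysctls' "$CONF" || \
	    sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	  sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	  # enable IPv4 forwarding, then restart CRI-O
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	  sudo systemctl daemon-reload && sudo systemctl restart crio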
	I1222 01:35:05.141253 2122110 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:35:05.141390 2122110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:35:05.145755 2122110 start.go:564] Will wait 60s for crictl version
	I1222 01:35:05.145875 2122110 ssh_runner.go:195] Run: which crictl
	I1222 01:35:05.150650 2122110 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:35:05.181935 2122110 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:35:05.182072 2122110 ssh_runner.go:195] Run: crio --version
	I1222 01:35:05.212416 2122110 ssh_runner.go:195] Run: crio --version
	I1222 01:35:05.244609 2122110 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:35:05.247581 2122110 cli_runner.go:164] Run: docker network inspect embed-certs-235567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:35:05.264046 2122110 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:35:05.268223 2122110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
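	(The one-liner above is minikube's idempotent /etc/hosts update: drop any existing host.minikube.internal line, append the current gateway mapping, and copy the result back. Expanded for readability, with the IP from this log:)
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts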
	I1222 01:35:05.277849 2122110 kubeadm.go:884] updating cluster {Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:35:05.277993 2122110 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:35:05.278055 2122110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:35:05.317228 2122110 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:35:05.317250 2122110 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:35:05.317311 2122110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:35:05.344042 2122110 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:35:05.344064 2122110 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:35:05.344072 2122110 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1222 01:35:05.344180 2122110 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-235567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
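	(The rendered drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hypothetical way to confirm what kubelet actually runs with on the node, not part of the test run:)
	
	  minikube ssh -p embed-certs-235567 -- systemctl cat kubelet
	  minikube ssh -p embed-certs-235567 -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf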
	I1222 01:35:05.344272 2122110 ssh_runner.go:195] Run: crio config
	I1222 01:35:05.399295 2122110 cni.go:84] Creating CNI manager for ""
	I1222 01:35:05.399362 2122110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:35:05.399399 2122110 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:35:05.399442 2122110 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235567 NodeName:embed-certs-235567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:35:05.399644 2122110 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
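	(The generated manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is written to /var/tmp/minikube/kubeadm.yaml.new below. If a start failed at this point, one hypothetical sanity check is kubeadm's own validator, assuming the kubeadm binary sits alongside kubelet under /var/lib/minikube/binaries:)
	
	  minikube ssh -p embed-certs-235567 -- sudo /var/lib/minikube/binaries/v1.34.3/kubeadm \
	    config validate --config /var/tmp/minikube/kubeadm.yaml.new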
	I1222 01:35:05.399747 2122110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:35:05.407595 2122110 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:35:05.407700 2122110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:35:05.415377 2122110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1222 01:35:05.428302 2122110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:35:05.441419 2122110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1222 01:35:05.455267 2122110 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:35:05.458938 2122110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:35:05.468826 2122110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:35:05.587395 2122110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:35:05.607524 2122110 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567 for IP: 192.168.85.2
	I1222 01:35:05.607585 2122110 certs.go:195] generating shared ca certs ...
	I1222 01:35:05.607623 2122110 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:05.607807 2122110 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:35:05.607901 2122110 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:35:05.607935 2122110 certs.go:257] generating profile certs ...
	I1222 01:35:05.608059 2122110 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/client.key
	I1222 01:35:05.608192 2122110 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key.859ed20e
	I1222 01:35:05.608290 2122110 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key
	I1222 01:35:05.608465 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:35:05.608537 2122110 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:35:05.608564 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:35:05.608621 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:35:05.608673 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:35:05.608731 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:35:05.608819 2122110 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:35:05.609671 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:35:05.642650 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:35:05.669787 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:35:05.692920 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:35:05.711850 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1222 01:35:05.730906 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:35:05.752495 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:35:05.772928 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/embed-certs-235567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:35:05.803929 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:35:05.829710 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:35:05.848244 2122110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:35:05.868983 2122110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:35:05.884828 2122110 ssh_runner.go:195] Run: openssl version
	I1222 01:35:05.891618 2122110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.899316 2122110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:35:05.907357 2122110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.911407 2122110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.911501 2122110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:35:05.953256 2122110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:35:05.961240 2122110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:35:05.969117 2122110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:35:05.977998 2122110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:35:05.982126 2122110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:35:05.982190 2122110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:35:06.023600 2122110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:35:06.031793 2122110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.040077 2122110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:35:06.048276 2122110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.052272 2122110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.052345 2122110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:35:06.094729 2122110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
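	(Each of the three blocks above follows the standard OpenSSL CA-directory pattern: copy the PEM into /usr/share/ca-certificates, link it into /etc/ssl/certs, then check that a <subject-hash>.0 symlink resolves. The same pattern for one certificate, with the file name and hash from this log:)
	
	  PEM=/usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 in this run
	  sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "hash link present"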
	I1222 01:35:06.102685 2122110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:35:06.107175 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:35:06.149064 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:35:06.190475 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:35:06.231759 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:35:06.276662 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:35:06.324325 2122110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
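	(The six openssl checks above all use -checkend 86400, i.e. fail if the certificate expires within 24 hours. The same sweep as a loop over the paths from this log:)
	
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	      || echo "$c expires within 24h"
	  done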
	I1222 01:35:06.416704 2122110 kubeadm.go:401] StartCluster: {Name:embed-certs-235567 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-235567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:35:06.416790 2122110 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:35:06.416857 2122110 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:35:06.444910 2122110 cri.go:96] found id: ""
	I1222 01:35:06.444982 2122110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:35:06.454000 2122110 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:35:06.454022 2122110 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:35:06.454089 2122110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:35:06.466514 2122110 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:35:06.466900 2122110 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-235567" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:35:06.467000 2122110 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-235567" cluster setting kubeconfig missing "embed-certs-235567" context setting]
	I1222 01:35:06.467277 2122110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:06.468949 2122110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:35:06.480891 2122110 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:35:06.480929 2122110 kubeadm.go:602] duration metric: took 26.895072ms to restartPrimaryControlPlane
	I1222 01:35:06.480939 2122110 kubeadm.go:403] duration metric: took 64.244425ms to StartCluster
	I1222 01:35:06.480955 2122110 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:06.481021 2122110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:35:06.481996 2122110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:35:06.482217 2122110 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:35:06.482568 2122110 config.go:182] Loaded profile config "embed-certs-235567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:35:06.482644 2122110 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:35:06.482784 2122110 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-235567"
	I1222 01:35:06.482804 2122110 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-235567"
	W1222 01:35:06.482810 2122110 addons.go:248] addon storage-provisioner should already be in state true
	I1222 01:35:06.482838 2122110 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:35:06.482882 2122110 addons.go:70] Setting dashboard=true in profile "embed-certs-235567"
	I1222 01:35:06.482935 2122110 addons.go:239] Setting addon dashboard=true in "embed-certs-235567"
	W1222 01:35:06.482961 2122110 addons.go:248] addon dashboard should already be in state true
	I1222 01:35:06.483012 2122110 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:35:06.483334 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.483582 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.484640 2122110 addons.go:70] Setting default-storageclass=true in profile "embed-certs-235567"
	I1222 01:35:06.484668 2122110 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235567"
	I1222 01:35:06.484954 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.494499 2122110 out.go:179] * Verifying Kubernetes components...
	I1222 01:35:06.501860 2122110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:35:06.538191 2122110 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:35:06.544989 2122110 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:35:06.548157 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:35:06.548181 2122110 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:35:06.548266 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:06.550087 2122110 addons.go:239] Setting addon default-storageclass=true in "embed-certs-235567"
	W1222 01:35:06.550109 2122110 addons.go:248] addon default-storageclass should already be in state true
	I1222 01:35:06.550132 2122110 host.go:66] Checking if "embed-certs-235567" exists ...
	I1222 01:35:06.550545 2122110 cli_runner.go:164] Run: docker container inspect embed-certs-235567 --format={{.State.Status}}
	I1222 01:35:06.556961 2122110 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:35:06.563595 2122110 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:35:06.563625 2122110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:35:06.563692 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:06.587545 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:06.600160 2122110 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:35:06.600181 2122110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:35:06.600240 2122110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-235567
	I1222 01:35:06.618031 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:06.643258 2122110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38761 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/embed-certs-235567/id_rsa Username:docker}
	I1222 01:35:06.892847 2122110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:35:06.923934 2122110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:35:06.929220 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:35:06.929293 2122110 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:35:06.942315 2122110 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235567" to be "Ready" ...
	I1222 01:35:06.956434 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:35:06.956462 2122110 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:35:06.982713 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:35:06.982739 2122110 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:35:07.031344 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:35:07.031364 2122110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:35:07.046807 2122110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:35:07.131041 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:35:07.131107 2122110 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:35:07.199554 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:35:07.199618 2122110 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:35:07.258579 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:35:07.258644 2122110 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:35:07.281974 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:35:07.282039 2122110 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:35:07.308281 2122110 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:35:07.308350 2122110 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:35:07.335096 2122110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:35:11.201573 2122110 node_ready.go:49] node "embed-certs-235567" is "Ready"
	I1222 01:35:11.201607 2122110 node_ready.go:38] duration metric: took 4.259190645s for node "embed-certs-235567" to be "Ready" ...
	I1222 01:35:11.201622 2122110 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:35:11.201686 2122110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:35:12.068685 2122110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.144673816s)
	I1222 01:35:12.068799 2122110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.021972614s)
	I1222 01:35:12.439964 2122110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.104785011s)
	I1222 01:35:12.440139 2122110 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.238436959s)
	I1222 01:35:12.440176 2122110 api_server.go:72] duration metric: took 5.957927974s to wait for apiserver process to appear ...
	I1222 01:35:12.440189 2122110 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:35:12.440206 2122110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:35:12.443208 2122110 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-235567 addons enable metrics-server
	
	I1222 01:35:12.446535 2122110 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1222 01:35:12.449546 2122110 addons.go:530] duration metric: took 5.966882312s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
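(Editor's note: the addon phase above scp's each dashboard manifest to /etc/kubernetes/addons/ and then applies them all in a single kubectl invocation — the ~5s "Completed:" durations a few lines below cover that one batched apply. A simplified sketch of the batch apply, assuming the manifests are already in place on the node; paths and the kubectl binary location are taken from the log, while the SSH and sudo wrapping are omitted.)

```go
package main

import (
	"os"
	"os/exec"
)

// applyManifests reproduces the single batched apply from the log: one
// kubectl process, many -f flags, pointed at the kubeconfig on the node.
func applyManifests(files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.3/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyManifests([]string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		// ...remaining dashboard-*.yaml files from the log line above
	})
}
```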
	I1222 01:35:12.463133 2122110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1222 01:35:12.463163 2122110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[... identical healthz response body as above; only [-]poststarthook/rbac/bootstrap-roles failing ...]
	I1222 01:35:12.940338 2122110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:35:12.950524 2122110 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 01:35:12.951606 2122110 api_server.go:141] control plane version: v1.34.3
	I1222 01:35:12.951630 2122110 api_server.go:131] duration metric: took 511.433692ms to wait for apiserver health ...
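(Editor's note: the transition from the 500 above to the 200 here is a plain poll of the apiserver's /healthz endpoint — the rbac/bootstrap-roles post-start hook finishes and the next probe succeeds. A minimal sketch of such a poll; TLS verification is skipped only to keep the example self-contained, whereas minikube itself trusts the cluster CA.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls https://<apiserver>/healthz until it answers 200 "ok"
// or the deadline passes. Non-200 responses, like the 500 seen above, are
// printed and retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute))
}
```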
	I1222 01:35:12.951640 2122110 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:35:12.959263 2122110 system_pods.go:59] 8 kube-system pods found
	I1222 01:35:12.959305 2122110 system_pods.go:61] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:35:12.959314 2122110 system_pods.go:61] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:35:12.959319 2122110 system_pods.go:61] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:35:12.959326 2122110 system_pods.go:61] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:35:12.959336 2122110 system_pods.go:61] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:35:12.959351 2122110 system_pods.go:61] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1222 01:35:12.959359 2122110 system_pods.go:61] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:35:12.959368 2122110 system_pods.go:61] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:35:12.959376 2122110 system_pods.go:74] duration metric: took 7.72999ms to wait for pod list to return data ...
	I1222 01:35:12.959412 2122110 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:35:12.963844 2122110 default_sa.go:45] found service account: "default"
	I1222 01:35:12.963870 2122110 default_sa.go:55] duration metric: took 4.451312ms for default service account to be created ...
	I1222 01:35:12.963880 2122110 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:35:12.987714 2122110 system_pods.go:86] 8 kube-system pods found
	I1222 01:35:12.987750 2122110 system_pods.go:89] "coredns-66bc5c9577-q5hbz" [db8e9fbc-5c68-4e74-b834-1141e6a8e09a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:35:12.987760 2122110 system_pods.go:89] "etcd-embed-certs-235567" [1dfdc9b3-d0ae-43fc-ba0b-f61557d241da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:35:12.987765 2122110 system_pods.go:89] "kindnet-mkqcr" [a59d7061-3de4-4c18-ae24-a8087309931e] Running
	I1222 01:35:12.987772 2122110 system_pods.go:89] "kube-apiserver-embed-certs-235567" [1b21aba0-6cda-4d40-8f40-e4683f419975] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:35:12.987779 2122110 system_pods.go:89] "kube-controller-manager-embed-certs-235567" [38db648a-111c-4630-b4f5-61adf4b65131] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:35:12.987791 2122110 system_pods.go:89] "kube-proxy-shsn4" [c9584571-8e4f-41dc-a968-d5eb5e4f3d10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1222 01:35:12.987797 2122110 system_pods.go:89] "kube-scheduler-embed-certs-235567" [a3029af9-c23f-42b6-bef8-73bbddfaaf54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:35:12.987813 2122110 system_pods.go:89] "storage-provisioner" [1e232d85-b68a-49ce-9efd-8a6981b86d96] Running
	I1222 01:35:12.987821 2122110 system_pods.go:126] duration metric: took 23.935703ms to wait for k8s-apps to be running ...
	I1222 01:35:12.987829 2122110 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:35:12.987882 2122110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:35:13.004206 2122110 system_svc.go:56] duration metric: took 16.366545ms WaitForService to wait for kubelet
	I1222 01:35:13.004239 2122110 kubeadm.go:587] duration metric: took 6.52199004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:35:13.004257 2122110 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:35:13.031362 2122110 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:35:13.031399 2122110 node_conditions.go:123] node cpu capacity is 2
	I1222 01:35:13.031413 2122110 node_conditions.go:105] duration metric: took 27.140784ms to run NodePressure ...
	I1222 01:35:13.031428 2122110 start.go:242] waiting for startup goroutines ...
	I1222 01:35:13.031435 2122110 start.go:247] waiting for cluster config update ...
	I1222 01:35:13.031447 2122110 start.go:256] writing updated cluster config ...
	I1222 01:35:13.031715 2122110 ssh_runner.go:195] Run: rm -f paused
	I1222 01:35:13.035475 2122110 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:35:13.058769 2122110 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q5hbz" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:35:15.065103 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:17.563941 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:20.065806 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:22.564566 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:25.064761 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:27.563809 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:29.564262 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:31.564914 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:34.065251 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:36.564641 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:39.064728 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:41.564008 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:44.064896 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	W1222 01:35:46.563679 2122110 pod_ready.go:104] pod "coredns-66bc5c9577-q5hbz" is not "Ready", error: <nil>
	I1222 01:35:47.564191 2122110 pod_ready.go:94] pod "coredns-66bc5c9577-q5hbz" is "Ready"
	I1222 01:35:47.564216 2122110 pod_ready.go:86] duration metric: took 34.505417794s for pod "coredns-66bc5c9577-q5hbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.566818 2122110 pod_ready.go:83] waiting for pod "etcd-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.571078 2122110 pod_ready.go:94] pod "etcd-embed-certs-235567" is "Ready"
	I1222 01:35:47.571107 2122110 pod_ready.go:86] duration metric: took 4.263674ms for pod "etcd-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.572974 2122110 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.576900 2122110 pod_ready.go:94] pod "kube-apiserver-embed-certs-235567" is "Ready"
	I1222 01:35:47.576926 2122110 pod_ready.go:86] duration metric: took 3.890935ms for pod "kube-apiserver-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.579001 2122110 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.761933 2122110 pod_ready.go:94] pod "kube-controller-manager-embed-certs-235567" is "Ready"
	I1222 01:35:47.761965 2122110 pod_ready.go:86] duration metric: took 182.939046ms for pod "kube-controller-manager-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:47.963187 2122110 pod_ready.go:83] waiting for pod "kube-proxy-shsn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.363058 2122110 pod_ready.go:94] pod "kube-proxy-shsn4" is "Ready"
	I1222 01:35:48.363134 2122110 pod_ready.go:86] duration metric: took 399.919738ms for pod "kube-proxy-shsn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.561952 2122110 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.962307 2122110 pod_ready.go:94] pod "kube-scheduler-embed-certs-235567" is "Ready"
	I1222 01:35:48.962336 2122110 pod_ready.go:86] duration metric: took 400.360274ms for pod "kube-scheduler-embed-certs-235567" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:35:48.962350 2122110 pod_ready.go:40] duration metric: took 35.926841966s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
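(Editor's note: the "extra waiting" loop above checks, for each control-plane label in turn, that the matching kube-system pod reports the PodReady condition or has been deleted, retrying on the ~2.5s cadence visible in the warnings. A client-go approximation of one such per-label wait — a sketch under the assumption of standard client-go APIs, not minikube's actual pod_ready.go helper.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady returns once every kube-system pod matching selector has
// Ready=True. An empty match also counts as done ("Ready or be gone").
func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := true
		for _, p := range pods.Items {
			ok := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ok = true
				}
			}
			ready = ready && ok
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(podsReady(ctx, cs, "k8s-app=kube-dns"))
}
```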
	I1222 01:35:49.020509 2122110 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:35:49.023602 2122110 out.go:179] * Done! kubectl is now configured to use "embed-certs-235567" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.626639345Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.633237406Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.633273441Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.633295816Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.636705272Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.636740873Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.636765037Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.639998498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.640035855Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.640061429Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.643293906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:35:52 embed-certs-235567 crio[649]: time="2025-12-22T01:35:52.64332812Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.848850848Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8c710d9c-8163-456f-b3e9-c4a0fc7899b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.850435505Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ecf8282f-3553-46b4-8bd6-7af5fdca2709 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.851740213Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper" id=9d8eb6c3-2edb-4db6-988d-50c5bb2677e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.851831049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.85845804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.859109311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.891601154Z" level=info msg="Created container 09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper" id=9d8eb6c3-2edb-4db6-988d-50c5bb2677e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.89667364Z" level=info msg="Starting container: 09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207" id=1bf632f6-cf6d-4522-975a-0bc06544d6b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:35:58 embed-certs-235567 crio[649]: time="2025-12-22T01:35:58.899356973Z" level=info msg="Started container" PID=1703 containerID=09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper id=1bf632f6-cf6d-4522-975a-0bc06544d6b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35c093b75969285df0de874b77dec9dd8dfaaa7d7e4b6b9edc565458b6f1817b
	Dec 22 01:35:58 embed-certs-235567 conmon[1701]: conmon 09e4ee6f88de2e1d5aec <ninfo>: container 1703 exited with status 1
	Dec 22 01:35:59 embed-certs-235567 crio[649]: time="2025-12-22T01:35:59.069193522Z" level=info msg="Removing container: 724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8" id=2f8984c6-3796-4958-a581-4107f4dbf506 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:35:59 embed-certs-235567 crio[649]: time="2025-12-22T01:35:59.08085612Z" level=info msg="Error loading conmon cgroup of container 724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8: cgroup deleted" id=2f8984c6-3796-4958-a581-4107f4dbf506 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 22 01:35:59 embed-certs-235567 crio[649]: time="2025-12-22T01:35:59.086944986Z" level=info msg="Removed container 724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662/dashboard-metrics-scraper" id=2f8984c6-3796-4958-a581-4107f4dbf506 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	09e4ee6f88de2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   35c093b759692       dashboard-metrics-scraper-6ffb444bf9-vx662   kubernetes-dashboard
	d61c20e397c03       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   6d879ff229568       storage-provisioner                          kube-system
	611a97bfc2a29       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   b2493cfbf7c23       kubernetes-dashboard-855c9754f9-5424p        kubernetes-dashboard
	65b574438a73e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   a8dcb3a4702f2       coredns-66bc5c9577-q5hbz                     kube-system
	43dcbf79e9d42       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   6d879ff229568       storage-provisioner                          kube-system
	6e7e75c0138f0       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   d69429cea04c1       kindnet-mkqcr                                kube-system
	85ab7fa5b1f3c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   68a401c072f7b       busybox                                      default
	61111522e0f72       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                           53 seconds ago      Running             kube-proxy                  1                   6c753d50963ac       kube-proxy-shsn4                             kube-system
	241d1760e627a       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                           59 seconds ago      Running             kube-controller-manager     1                   abbf9f970d312       kube-controller-manager-embed-certs-235567   kube-system
	21f70a0897f78       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           59 seconds ago      Running             etcd                        1                   6e5820b99a046       etcd-embed-certs-235567                      kube-system
	1b831e8cc8f3d       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                           59 seconds ago      Running             kube-apiserver              1                   32f555e4b9d75       kube-apiserver-embed-certs-235567            kube-system
	bef725e3057a7       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                           59 seconds ago      Running             kube-scheduler              1                   709ba04fcc382       kube-scheduler-embed-certs-235567            kube-system
	
	
	==> coredns [65b574438a73e2b2eefd38c4e7e07643ee4f34520f64175c82322af50cf08c67] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32852 - 44244 "HINFO IN 7198693271870947441.562273083697011167. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004445338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
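(Editor's note: the i/o timeouts above are coredns failing to reach the in-cluster apiserver Service at 10.96.0.1:443 while the service NAT rules were still being restored after the restart; once kube-proxy and kindnet resync, the reflector recovers. A trivial connectivity probe of the kind one might run from inside the pod network to confirm this — a diagnostic sketch, not something the test harness executes.)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// main attempts the same TCP connection coredns' reflector makes to the
// kubernetes ClusterIP. A timeout here, with the node otherwise healthy,
// usually means the Service NAT rules are not programmed yet.
func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable")
}
```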
	
	
	==> describe nodes <==
	Name:               embed-certs-235567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-235567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=embed-certs-235567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_34_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-235567
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:35:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:35:42 +0000   Mon, 22 Dec 2025 01:34:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-235567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                d4644b9d-9d43-4a69-8d1b-d5cf0da8245d
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-q5hbz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-embed-certs-235567                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-mkqcr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-235567             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-235567    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-shsn4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-235567             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vx662    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5424p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 106s               kube-proxy       
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 114s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 114s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     113s               kubelet          Node embed-certs-235567 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-235567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  113s               kubelet          Node embed-certs-235567 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           109s               node-controller  Node embed-certs-235567 event: Registered Node embed-certs-235567 in Controller
	  Normal   NodeReady                95s                kubelet          Node embed-certs-235567 status is now: NodeReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-235567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-235567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-235567 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                node-controller  Node embed-certs-235567 event: Registered Node embed-certs-235567 in Controller
	
	
	==> dmesg <==
	[Dec22 01:05] overlayfs: idmapped layers are currently not supported
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [21f70a0897f7814aab2cb5d3f2434693fd5a4d6832c30cb70742dc4fa69e6e5d] <==
	{"level":"warn","ts":"2025-12-22T01:35:09.681123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.698183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.718388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.731437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.751322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.767283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.795903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.827836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.846559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.893196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.908166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.943131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.950370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.969730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:09.993441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.017510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.027943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.045756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.070120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.083233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.122299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.158421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.198056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.222166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:35:10.287132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55968","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:36:06 up 1 day,  8:18,  0 user,  load average: 2.57, 2.83, 2.45
	Linux embed-certs-235567 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e7e75c0138f0dafd4e46397e222ce458e1253549363ec3b3c0b35b0eb5a3e22] <==
	I1222 01:35:12.467896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:35:12.477545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:35:12.477782       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:35:12.477865       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:35:12.477921       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:35:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:35:12.625122       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:35:12.625147       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:35:12.625156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:35:12.625886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1222 01:35:42.625473       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1222 01:35:42.625659       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1222 01:35:42.625745       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1222 01:35:42.625865       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1222 01:35:44.025608       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:35:44.025724       1 metrics.go:72] Registering metrics
	I1222 01:35:44.025819       1 controller.go:711] "Syncing nftables rules"
	I1222 01:35:52.625630       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:35:52.625685       1 main.go:301] handling current node
	I1222 01:36:02.625695       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:36:02.625728       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1b831e8cc8f3d0a0cb574d1962397875204d05857734e2b9f7aae161ea35fe4e] <==
	I1222 01:35:11.256232       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1222 01:35:11.256633       1 aggregator.go:171] initial CRD sync complete...
	I1222 01:35:11.256682       1 autoregister_controller.go:144] Starting autoregister controller
	I1222 01:35:11.256709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1222 01:35:11.256737       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:35:11.259996       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1222 01:35:11.294633       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1222 01:35:11.299070       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1222 01:35:11.302740       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:35:11.312313       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1222 01:35:11.312344       1 policy_source.go:240] refreshing policies
	I1222 01:35:11.313542       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1222 01:35:11.313601       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1222 01:35:11.316687       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:35:11.801308       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:35:11.805760       1 controller.go:667] quota admission added evaluator for: namespaces
	I1222 01:35:11.867798       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:35:11.951359       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1222 01:35:12.021039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:35:12.060691       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:35:12.327366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.143.12"}
	I1222 01:35:12.416318       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.68.183"}
	I1222 01:35:14.840164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1222 01:35:15.042933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:35:15.090054       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [241d1760e627abc7a3d344ac05e9a57f9c7451cb29646bdefe78bfbc841f5c11] <==
	I1222 01:35:14.536271       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 01:35:14.538587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:35:14.538701       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1222 01:35:14.538856       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:35:14.542060       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1222 01:35:14.542258       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1222 01:35:14.542423       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1222 01:35:14.542486       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1222 01:35:14.542516       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1222 01:35:14.542544       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1222 01:35:14.545189       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1222 01:35:14.547747       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1222 01:35:14.553364       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:35:14.555585       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:35:14.559871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:35:14.561556       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1222 01:35:14.564818       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1222 01:35:14.567043       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:35:14.569101       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:35:14.571748       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:35:14.583678       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1222 01:35:14.583790       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 01:35:14.583683       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1222 01:35:14.583724       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1222 01:35:14.583704       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [61111522e0f72b777c735bc37c123c749993783fad4a5178055cf076026c6073] <==
	I1222 01:35:12.462740       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:35:12.539512       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:35:12.639845       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:35:12.639943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1222 01:35:12.640065       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:35:12.736174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:35:12.736220       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:35:12.739949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:35:12.740253       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:35:12.740279       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:35:12.743570       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:35:12.743649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:35:12.743960       1 config.go:200] "Starting service config controller"
	I1222 01:35:12.744004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:35:12.746035       1 config.go:309] "Starting node config controller"
	I1222 01:35:12.746053       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:35:12.746060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:35:12.746356       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:35:12.746374       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:35:12.844705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:35:12.844771       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1222 01:35:12.846702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bef725e3057a75230c765ab506197a6b277dd374b7da1670267592ddadf181e3] <==
	I1222 01:35:09.735114       1 serving.go:386] Generated self-signed cert in-memory
	W1222 01:35:11.137608       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1222 01:35:11.137646       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:35:11.137657       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1222 01:35:11.137664       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1222 01:35:11.240680       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1222 01:35:11.240708       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:35:11.242895       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:35:11.242923       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:35:11.244932       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1222 01:35:11.245002       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:35:11.343623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 22 01:35:15 embed-certs-235567 kubelet[776]: W1222 01:35:15.549935     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4081720ccc5a04f221c448981f9355f26408ea71abaa9669d9cbdb8fdd3027c6/crio-b2493cfbf7c23b9af773669df392217b8f3bbc797198470a78dda26f5526a6b8 WatchSource:0}: Error finding container b2493cfbf7c23b9af773669df392217b8f3bbc797198470a78dda26f5526a6b8: Status 404 returned error can't find the container with id b2493cfbf7c23b9af773669df392217b8f3bbc797198470a78dda26f5526a6b8
	Dec 22 01:35:17 embed-certs-235567 kubelet[776]: I1222 01:35:17.107459     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 22 01:35:19 embed-certs-235567 kubelet[776]: I1222 01:35:19.950154     776 scope.go:117] "RemoveContainer" containerID="00f4c49cc4c5ce843371f4d342bd1514aa548a106dd97e9207a61e6d72c02e8e"
	Dec 22 01:35:20 embed-certs-235567 kubelet[776]: I1222 01:35:20.955532     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:20 embed-certs-235567 kubelet[776]: E1222 01:35:20.955686     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:20 embed-certs-235567 kubelet[776]: I1222 01:35:20.957900     776 scope.go:117] "RemoveContainer" containerID="00f4c49cc4c5ce843371f4d342bd1514aa548a106dd97e9207a61e6d72c02e8e"
	Dec 22 01:35:21 embed-certs-235567 kubelet[776]: I1222 01:35:21.959815     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:21 embed-certs-235567 kubelet[776]: E1222 01:35:21.959945     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:24 embed-certs-235567 kubelet[776]: I1222 01:35:24.699904     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:24 embed-certs-235567 kubelet[776]: E1222 01:35:24.700093     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:25 embed-certs-235567 kubelet[776]: I1222 01:35:25.558402     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5424p" podStartSLOduration=2.188318763 podStartE2EDuration="10.558385584s" podCreationTimestamp="2025-12-22 01:35:15 +0000 UTC" firstStartedPulling="2025-12-22 01:35:15.553070713 +0000 UTC m=+9.947349182" lastFinishedPulling="2025-12-22 01:35:23.923137534 +0000 UTC m=+18.317416003" observedRunningTime="2025-12-22 01:35:24.978631265 +0000 UTC m=+19.372909766" watchObservedRunningTime="2025-12-22 01:35:25.558385584 +0000 UTC m=+19.952664053"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: I1222 01:35:35.848461     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: I1222 01:35:35.993175     776 scope.go:117] "RemoveContainer" containerID="468235abba7b2c0bc5aea1e7f69b93dc5b680db423aab3759eb63bc4896a26c5"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: I1222 01:35:35.994170     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:35 embed-certs-235567 kubelet[776]: E1222 01:35:35.996244     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:43 embed-certs-235567 kubelet[776]: I1222 01:35:43.025381     776 scope.go:117] "RemoveContainer" containerID="43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c"
	Dec 22 01:35:44 embed-certs-235567 kubelet[776]: I1222 01:35:44.700436     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:44 embed-certs-235567 kubelet[776]: E1222 01:35:44.701077     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:35:58 embed-certs-235567 kubelet[776]: I1222 01:35:58.848251     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:59 embed-certs-235567 kubelet[776]: I1222 01:35:59.067367     776 scope.go:117] "RemoveContainer" containerID="724268b148d1c8bee813957b338abc7835bc4d0f7493a38edf487c57bb2f45b8"
	Dec 22 01:35:59 embed-certs-235567 kubelet[776]: I1222 01:35:59.067587     776 scope.go:117] "RemoveContainer" containerID="09e4ee6f88de2e1d5aecf8fae028a6e869e47415f573be28869a8739a0e2a207"
	Dec 22 01:35:59 embed-certs-235567 kubelet[776]: E1222 01:35:59.068048     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vx662_kubernetes-dashboard(a710b15e-d2b7-455f-a2d0-02424063c158)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vx662" podUID="a710b15e-d2b7-455f-a2d0-02424063c158"
	Dec 22 01:36:01 embed-certs-235567 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:36:01 embed-certs-235567 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:36:01 embed-certs-235567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [611a97bfc2a29f0e10c0196b1d3eba98b24c290267f58aab6f754d2c388f96e7] <==
	2025/12/22 01:35:23 Using namespace: kubernetes-dashboard
	2025/12/22 01:35:23 Using in-cluster config to connect to apiserver
	2025/12/22 01:35:23 Using secret token for csrf signing
	2025/12/22 01:35:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/22 01:35:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/22 01:35:24 Successful initial request to the apiserver, version: v1.34.3
	2025/12/22 01:35:24 Generating JWE encryption key
	2025/12/22 01:35:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/22 01:35:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/22 01:35:24 Initializing JWE encryption key from synchronized object
	2025/12/22 01:35:24 Creating in-cluster Sidecar client
	2025/12/22 01:35:24 Serving insecurely on HTTP port: 9090
	2025/12/22 01:35:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:35:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:35:23 Starting overwatch
	
	
	==> storage-provisioner [43dcbf79e9d4272e9576e394ab66d96aa9a891411c988a1f8f0c7ddaf6aaed7c] <==
	I1222 01:35:12.370964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1222 01:35:42.372662       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d61c20e397c0328b0fd6e7bd1535b2a978028501bf444674fce6879659e6ae3b] <==
	I1222 01:35:43.079844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:35:43.094693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:35:43.095245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1222 01:35:43.099421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:46.554942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:50.814629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:54.412819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:35:57.466325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:00.491820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:00.505301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:36:00.505888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:36:00.506276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-235567_f519d8a3-c794-41b5-98b7-77d342de02ae!
	I1222 01:36:00.506490       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d572cc3c-b104-421c-a93c-f4c2c45f34e5", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-235567_f519d8a3-c794-41b5-98b7-77d342de02ae became leader
	W1222 01:36:00.516435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:00.521314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:36:00.609909       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-235567_f519d8a3-c794-41b5-98b7-77d342de02ae!
	W1222 01:36:02.531501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:02.536774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:04.549353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:04.558159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-235567 -n embed-certs-235567
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-235567 -n embed-certs-235567: exit status 2 (375.914458ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-235567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.18s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (276.891064ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:37:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-733235 describe deploy/metrics-server -n kube-system: exit status 1 (92.276383ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-733235 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-733235
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-733235:

-- stdout --
	[
	    {
	        "Id": "7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269",
	        "Created": "2025-12-22T01:36:14.986448621Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2126653,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:36:15.050130569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/hosts",
	        "LogPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269-json.log",
	        "Name": "/default-k8s-diff-port-733235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-733235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-733235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269",
	                "LowerDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-733235",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-733235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-733235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-733235",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-733235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d59c3235ba88e29381d96f1e399c116a4ac423631c589e279760e80a28b8d077",
	            "SandboxKey": "/var/run/docker/netns/d59c3235ba88",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38766"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38767"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38770"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38768"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-733235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:4d:51:67:55:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "587a11d4872e57b94e293749bb81cafed3a21b4741fd9542ba2b75e1a005d466",
	                    "EndpointID": "539d3153503914bbe5b269a47deb6bf2362128b9c0767b91c8434b87d579c353",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-733235",
	                        "7c78b7294c07"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-733235 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-733235 logs -n 25: (1.160600642s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-433911                                                                                                                                                                                                                        │ cert-options-433911          │ jenkins │ v1.37.0 │ 22 Dec 25 01:30 UTC │ 22 Dec 25 01:31 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:31 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-535197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ stop    │ -p old-k8s-version-535197 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                   │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                         │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                         │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                               │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:36:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:36:10.162672 2126229 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:36:10.162830 2126229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:10.162842 2126229 out.go:374] Setting ErrFile to fd 2...
	I1222 01:36:10.162847 2126229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:10.163121 2126229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:36:10.163585 2126229 out.go:368] Setting JSON to false
	I1222 01:36:10.164478 2126229 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116320,"bootTime":1766251051,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:36:10.164540 2126229 start.go:143] virtualization:  
	I1222 01:36:10.170243 2126229 out.go:179] * [default-k8s-diff-port-733235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:36:10.173561 2126229 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:36:10.173621 2126229 notify.go:221] Checking for updates...
	I1222 01:36:10.179744 2126229 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:36:10.182723 2126229 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:36:10.185786 2126229 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:36:10.188781 2126229 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:36:10.191805 2126229 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:36:10.195360 2126229 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:10.195499 2126229 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:36:10.218550 2126229 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:36:10.218677 2126229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:10.280114 2126229 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:10.270868621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:10.280218 2126229 docker.go:319] overlay module found
	I1222 01:36:10.283523 2126229 out.go:179] * Using the docker driver based on user configuration
	I1222 01:36:10.286447 2126229 start.go:309] selected driver: docker
	I1222 01:36:10.286467 2126229 start.go:928] validating driver "docker" against <nil>
	I1222 01:36:10.286481 2126229 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:36:10.287219 2126229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:10.355995 2126229 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:10.346341984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:10.356180 2126229 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:36:10.356432 2126229 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:36:10.359568 2126229 out.go:179] * Using Docker driver with root privileges
	I1222 01:36:10.362500 2126229 cni.go:84] Creating CNI manager for ""
	I1222 01:36:10.362583 2126229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:36:10.362596 2126229 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:36:10.362684 2126229 start.go:353] cluster config:
	{Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:10.365878 2126229 out.go:179] * Starting "default-k8s-diff-port-733235" primary control-plane node in "default-k8s-diff-port-733235" cluster
	I1222 01:36:10.368889 2126229 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:36:10.371857 2126229 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:36:10.374758 2126229 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:36:10.374819 2126229 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:36:10.374836 2126229 cache.go:65] Caching tarball of preloaded images
	I1222 01:36:10.374859 2126229 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:36:10.374939 2126229 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:36:10.374949 2126229 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:36:10.375049 2126229 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json ...
	I1222 01:36:10.375082 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json: {Name:mk0cc7881f9bdeabd3a6b5c3ca6eb16e7a5670ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:10.394740 2126229 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:36:10.394762 2126229 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:36:10.394776 2126229 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:36:10.394811 2126229 start.go:360] acquireMachinesLock for default-k8s-diff-port-733235: {Name:mk42f93f802a61825395428c6e75b6089866c8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:10.394925 2126229 start.go:364] duration metric: took 85.315µs to acquireMachinesLock for "default-k8s-diff-port-733235"
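The two lines above show the machines lock being acquired under a spec of {Delay:500ms Timeout:10m0s} and the acquisition time then logged as a "duration metric". A minimal sketch of that poll-until-timeout pattern, in Go; the acquire helper and the try callback are illustrative assumptions, not minikube APIs:

package main

import (
	"errors"
	"fmt"
	"time"
)

// acquire polls try() every delay until it succeeds or timeout elapses,
// returning how long the acquisition took (cf. the duration metric above).
func acquire(try func() bool, delay, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for {
		if try() {
			return time.Since(start), nil
		}
		if time.Since(start) > timeout {
			return 0, errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	free := true // stand-in for "no other profile holds the lock"
	took, err := acquire(func() bool { return free }, 500*time.Millisecond, 10*time.Minute)
	fmt.Println(took, err) // uncontended, so took is in the microsecond range, as in the log
}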
	I1222 01:36:10.394957 2126229 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:36:10.395031 2126229 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:36:10.398556 2126229 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:36:10.398813 2126229 start.go:159] libmachine.API.Create for "default-k8s-diff-port-733235" (driver="docker")
	I1222 01:36:10.398858 2126229 client.go:173] LocalClient.Create starting
	I1222 01:36:10.398933 2126229 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:36:10.398984 2126229 main.go:144] libmachine: Decoding PEM data...
	I1222 01:36:10.399007 2126229 main.go:144] libmachine: Parsing certificate...
	I1222 01:36:10.399065 2126229 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:36:10.399087 2126229 main.go:144] libmachine: Decoding PEM data...
	I1222 01:36:10.399099 2126229 main.go:144] libmachine: Parsing certificate...
	I1222 01:36:10.399487 2126229 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-733235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:36:10.416568 2126229 cli_runner.go:211] docker network inspect default-k8s-diff-port-733235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:36:10.416656 2126229 network_create.go:284] running [docker network inspect default-k8s-diff-port-733235] to gather additional debugging logs...
	I1222 01:36:10.416682 2126229 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-733235
	W1222 01:36:10.432600 2126229 cli_runner.go:211] docker network inspect default-k8s-diff-port-733235 returned with exit code 1
	I1222 01:36:10.432631 2126229 network_create.go:287] error running [docker network inspect default-k8s-diff-port-733235]: docker network inspect default-k8s-diff-port-733235: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-733235 not found
	I1222 01:36:10.432645 2126229 network_create.go:289] output of [docker network inspect default-k8s-diff-port-733235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-733235 not found
	
	** /stderr **
	I1222 01:36:10.432754 2126229 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:36:10.449912 2126229 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:36:10.450296 2126229 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:36:10.450652 2126229 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:36:10.450901 2126229 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:36:10.451325 2126229 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a49ad0}
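The scan above walks candidate private /24 subnets (192.168.49.0/24, then 58, 67, 76, ...) and settles on the first one with no backing bridge interface. A minimal Go sketch of that selection; the 9-wide step and the upper bound are inferred from the subnets in this log, not confirmed from minikube's source:

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 not already claimed by
// an existing docker bridge, mirroring the skip/use decisions logged above.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets the log reports as taken by earlier test clusters.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching the log
}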
	I1222 01:36:10.451357 2126229 network_create.go:124] attempt to create docker network default-k8s-diff-port-733235 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:36:10.451414 2126229 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-733235 default-k8s-diff-port-733235
	I1222 01:36:10.512353 2126229 network_create.go:108] docker network default-k8s-diff-port-733235 192.168.85.0/24 created
	I1222 01:36:10.512386 2126229 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-733235" container
	I1222 01:36:10.512481 2126229 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:36:10.528289 2126229 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-733235 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-733235 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:36:10.545695 2126229 oci.go:103] Successfully created a docker volume default-k8s-diff-port-733235
	I1222 01:36:10.545796 2126229 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-733235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-733235 --entrypoint /usr/bin/test -v default-k8s-diff-port-733235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:36:11.122709 2126229 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-733235
	I1222 01:36:11.122780 2126229 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:36:11.122792 2126229 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:36:11.122874 2126229 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-733235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:36:14.920626 2126229 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-733235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.797708995s)
	I1222 01:36:14.920663 2126229 kic.go:203] duration metric: took 3.79786753s to extract preloaded images to volume ...
	W1222 01:36:14.920802 2126229 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:36:14.920918 2126229 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:36:14.972203 2126229 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-733235 --name default-k8s-diff-port-733235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-733235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-733235 --network default-k8s-diff-port-733235 --ip 192.168.85.2 --volume default-k8s-diff-port-733235:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:36:15.266686 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Running}}
	I1222 01:36:15.292884 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:36:15.315714 2126229 cli_runner.go:164] Run: docker exec default-k8s-diff-port-733235 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:36:15.367726 2126229 oci.go:144] the created container "default-k8s-diff-port-733235" has a running status.
	I1222 01:36:15.367754 2126229 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa...
	I1222 01:36:15.594028 2126229 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:36:15.629706 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:36:15.664682 2126229 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:36:15.664709 2126229 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-733235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:36:15.716260 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:36:15.746906 2126229 machine.go:94] provisionDockerMachine start ...
	I1222 01:36:15.747030 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:15.778495 2126229 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:15.778888 2126229 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38766 <nil> <nil>}
	I1222 01:36:15.778908 2126229 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:36:15.780042 2126229 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:36:18.912855 2126229 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733235
	
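The first SSH handshake above fails with EOF because sshd inside the freshly started container is not yet accepting connections; a later attempt succeeds. A minimal dial-with-retry sketch using golang.org/x/crypto/ssh; the helper is illustrative, not minikube's code, and 127.0.0.1:38766 is the host port Docker forwarded to the container's 22/tcp in this run:

package main

import (
	"log"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH handshake until sshd inside the
// freshly created container is ready, mirroring the EOF-then-success above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int, wait time.Duration) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(wait) // e.g. "ssh: handshake failed: EOF" while sshd boots
	}
	return nil, err
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker", // the user provisioned via authorized_keys above
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	if _, err := dialWithRetry("127.0.0.1:38766", cfg, 10, time.Second); err != nil {
		log.Fatal(err)
	}
}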
	I1222 01:36:18.912881 2126229 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-733235"
	I1222 01:36:18.912953 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:18.930635 2126229 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:18.930946 2126229 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38766 <nil> <nil>}
	I1222 01:36:18.930964 2126229 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733235 && echo "default-k8s-diff-port-733235" | sudo tee /etc/hostname
	I1222 01:36:19.071489 2126229 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733235
	
	I1222 01:36:19.071585 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:19.090339 2126229 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:19.090651 2126229 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38766 <nil> <nil>}
	I1222 01:36:19.090673 2126229 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:36:19.233594 2126229 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:36:19.233621 2126229 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:36:19.233647 2126229 ubuntu.go:190] setting up certificates
	I1222 01:36:19.233656 2126229 provision.go:84] configureAuth start
	I1222 01:36:19.233714 2126229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:36:19.250728 2126229 provision.go:143] copyHostCerts
	I1222 01:36:19.250803 2126229 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:36:19.250819 2126229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:36:19.250899 2126229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:36:19.250998 2126229 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:36:19.251009 2126229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:36:19.251035 2126229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:36:19.251090 2126229 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:36:19.251099 2126229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:36:19.251121 2126229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:36:19.251170 2126229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733235 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-733235 localhost minikube]
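The server certificate generated above carries both IP SANs (127.0.0.1, 192.168.85.2) and DNS SANs (default-k8s-diff-port-733235, localhost, minikube) and is signed with the shared CA key. A hedged sketch of issuing such a cert with crypto/x509; the field choices are assumptions for illustration and minikube's own implementation may differ:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate with the SANs the log lists,
// signed by the given CA pair.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-733235"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"default-k8s-diff-port-733235", "localhost", "minikube"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := signServerCert(ca, caKey)
	fmt.Println(len(der), err)
}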
	I1222 01:36:19.418796 2126229 provision.go:177] copyRemoteCerts
	I1222 01:36:19.418878 2126229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:36:19.418946 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:19.436073 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:19.533337 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:36:19.552348 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:36:19.569629 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1222 01:36:19.586807 2126229 provision.go:87] duration metric: took 353.137637ms to configureAuth
	I1222 01:36:19.586836 2126229 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:36:19.587016 2126229 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:36:19.587125 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:19.603747 2126229 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:19.604283 2126229 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38766 <nil> <nil>}
	I1222 01:36:19.604309 2126229 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:36:19.925121 2126229 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:36:19.925187 2126229 machine.go:97] duration metric: took 4.178256644s to provisionDockerMachine
	I1222 01:36:19.925213 2126229 client.go:176] duration metric: took 9.526343316s to LocalClient.Create
	I1222 01:36:19.925246 2126229 start.go:167] duration metric: took 9.526439476s to libmachine.API.Create "default-k8s-diff-port-733235"
	I1222 01:36:19.925266 2126229 start.go:293] postStartSetup for "default-k8s-diff-port-733235" (driver="docker")
	I1222 01:36:19.925289 2126229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:36:19.925392 2126229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:36:19.925461 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:19.943041 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:20.038090 2126229 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:36:20.041512 2126229 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:36:20.041543 2126229 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:36:20.041556 2126229 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:36:20.041616 2126229 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:36:20.041726 2126229 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:36:20.041836 2126229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:36:20.049525 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:36:20.067917 2126229 start.go:296] duration metric: took 142.624023ms for postStartSetup
	I1222 01:36:20.068347 2126229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:36:20.087020 2126229 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json ...
	I1222 01:36:20.087316 2126229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:36:20.087375 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:20.106529 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:20.202921 2126229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:36:20.208063 2126229 start.go:128] duration metric: took 9.813010641s to createHost
	I1222 01:36:20.208091 2126229 start.go:83] releasing machines lock for "default-k8s-diff-port-733235", held for 9.813151576s
	I1222 01:36:20.208165 2126229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:36:20.225349 2126229 ssh_runner.go:195] Run: cat /version.json
	I1222 01:36:20.225366 2126229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:36:20.225406 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:20.225451 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:20.245035 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:20.259715 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:20.340847 2126229 ssh_runner.go:195] Run: systemctl --version
	I1222 01:36:20.451246 2126229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:36:20.499341 2126229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:36:20.503701 2126229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:36:20.503773 2126229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:36:20.533241 2126229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:36:20.533266 2126229 start.go:496] detecting cgroup driver to use...
	I1222 01:36:20.533297 2126229 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:36:20.533418 2126229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:36:20.550332 2126229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:36:20.563034 2126229 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:36:20.563121 2126229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:36:20.582141 2126229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:36:20.603016 2126229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:36:20.732171 2126229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:36:20.849225 2126229 docker.go:234] disabling docker service ...
	I1222 01:36:20.849341 2126229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:36:20.871984 2126229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:36:20.885470 2126229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:36:20.996368 2126229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:36:21.110528 2126229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:36:21.123840 2126229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:36:21.138040 2126229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:36:21.138118 2126229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.147091 2126229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:36:21.147174 2126229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.156659 2126229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.165730 2126229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.174804 2126229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:36:21.183354 2126229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.192560 2126229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.206292 2126229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:36:21.215349 2126229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:36:21.223963 2126229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:36:21.231485 2126229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:21.339625 2126229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:36:21.527555 2126229 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:36:21.527634 2126229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:36:21.531599 2126229 start.go:564] Will wait 60s for crictl version
	I1222 01:36:21.531667 2126229 ssh_runner.go:195] Run: which crictl
	I1222 01:36:21.535302 2126229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:36:21.559483 2126229 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:36:21.559594 2126229 ssh_runner.go:195] Run: crio --version
	I1222 01:36:21.587373 2126229 ssh_runner.go:195] Run: crio --version
	I1222 01:36:21.618420 2126229 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:36:21.621204 2126229 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-733235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:36:21.636675 2126229 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:36:21.640372 2126229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:21.649711 2126229 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:36:21.649823 2126229 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:36:21.649893 2126229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:36:21.681902 2126229 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:36:21.681924 2126229 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:36:21.681980 2126229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:36:21.708795 2126229 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:36:21.708819 2126229 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:36:21.708827 2126229 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.3 crio true true} ...
	I1222 01:36:21.708913 2126229 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-733235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:36:21.708999 2126229 ssh_runner.go:195] Run: crio config
	I1222 01:36:21.774957 2126229 cni.go:84] Creating CNI manager for ""
	I1222 01:36:21.774980 2126229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:36:21.774998 2126229 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:36:21.775044 2126229 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733235 NodeName:default-k8s-diff-port-733235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:36:21.775177 2126229 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:36:21.775251 2126229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:36:21.783155 2126229 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:36:21.783223 2126229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:36:21.790640 2126229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1222 01:36:21.803230 2126229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:36:21.816209 2126229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
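Note that the rendered kubeadm config is staged as kubeadm.yaml.new here and only copied over the live file later, after the stale-config checks further down in this log. A minimal sketch of that stage-then-promote pattern; the helper name is assumed, and the log uses sudo cp where this uses the unprivileged rename equivalent:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// promoteConfig writes the rendered config beside the live one as
// <live>.new, then promotes it once checks pass.
func promoteConfig(live string, rendered []byte) error {
	if err := os.WriteFile(live+".new", rendered, 0o644); err != nil {
		return err
	}
	// ... stale-config checks would run between these two steps ...
	return os.Rename(live+".new", live)
}

func main() {
	live := filepath.Join(os.TempDir(), "kubeadm.yaml") // stand-in for /var/tmp/minikube/kubeadm.yaml
	if err := promoteConfig(live, []byte("kind: ClusterConfiguration\n")); err != nil {
		log.Fatal(err)
	}
}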
	I1222 01:36:21.828031 2126229 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:36:21.831683 2126229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:21.842227 2126229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:21.963834 2126229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:21.980310 2126229 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235 for IP: 192.168.85.2
	I1222 01:36:21.980328 2126229 certs.go:195] generating shared ca certs ...
	I1222 01:36:21.980344 2126229 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:21.980478 2126229 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:36:21.980518 2126229 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:36:21.980524 2126229 certs.go:257] generating profile certs ...
	I1222 01:36:21.980578 2126229 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.key
	I1222 01:36:21.980607 2126229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt with IP's: []
	I1222 01:36:22.134007 2126229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt ...
	I1222 01:36:22.134040 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: {Name:mk6a6b649904285b14194de73e2c2c4c218fed30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:22.134268 2126229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.key ...
	I1222 01:36:22.134284 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.key: {Name:mk7da87766a1f9a453f555a0c50625f55a00e004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:22.134390 2126229 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key.be3f76d2
	I1222 01:36:22.134410 2126229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt.be3f76d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:36:22.496548 2126229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt.be3f76d2 ...
	I1222 01:36:22.496582 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt.be3f76d2: {Name:mkd2baeda4ea7198f77a7bbf241fe42c49fbab96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:22.496781 2126229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key.be3f76d2 ...
	I1222 01:36:22.496797 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key.be3f76d2: {Name:mk068ef898363a833d9428d78f08390a3a8fb08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:22.496892 2126229 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt.be3f76d2 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt
	I1222 01:36:22.497012 2126229 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key.be3f76d2 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key
	I1222 01:36:22.497086 2126229 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key
	I1222 01:36:22.497109 2126229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.crt with IP's: []
	I1222 01:36:22.957907 2126229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.crt ...
	I1222 01:36:22.957944 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.crt: {Name:mkd3e1cd0716fdd1b3dda0cab484aa1dc0dde271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:22.958149 2126229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key ...
	I1222 01:36:22.958172 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key: {Name:mk1c5a678397ff910da847088a0df267092af6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:22.958403 2126229 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:36:22.958457 2126229 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:36:22.958478 2126229 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:36:22.958511 2126229 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:36:22.958544 2126229 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:36:22.958573 2126229 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:36:22.958624 2126229 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:36:22.959193 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:36:22.985760 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:36:23.005755 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:36:23.024056 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:36:23.041798 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1222 01:36:23.058830 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:36:23.076480 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:36:23.094015 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:36:23.111597 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:36:23.129081 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:36:23.146070 2126229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:36:23.163726 2126229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:36:23.175977 2126229 ssh_runner.go:195] Run: openssl version
	I1222 01:36:23.184224 2126229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:36:23.191372 2126229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:36:23.199361 2126229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:36:23.202981 2126229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:36:23.203058 2126229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:36:23.247623 2126229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:36:23.255046 2126229 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:36:23.262224 2126229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:36:23.269136 2126229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:36:23.276679 2126229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:36:23.280198 2126229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:36:23.280268 2126229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:36:23.320985 2126229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:36:23.328550 2126229 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:36:23.335604 2126229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:23.342821 2126229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:36:23.349960 2126229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:23.353803 2126229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:23.353887 2126229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:23.395547 2126229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:36:23.402851 2126229 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
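The openssl x509 -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: each CA PEM copied under /usr/share/ca-certificates gets a /etc/ssl/certs/<hash>.0 symlink so the TLS stack can locate it by hashed subject. A small Go sketch of the same two steps; the helper is illustrative, not minikube code, and running it against /etc/ssl/certs requires root:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash, then points
// /etc/ssl/certs/<hash>.0 at the PEM, replacing any stale link (ln -fs).
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // -f: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}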
	I1222 01:36:23.409713 2126229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:36:23.413039 2126229 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:36:23.413113 2126229 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.413192 2126229 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:36:23.413245 2126229 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:36:23.439885 2126229 cri.go:96] found id: ""
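
An empty ID list here tells minikube no kube-system containers exist yet, i.e. this really is a first start. A sketch of the same probe, assuming crictl is installed and sudo needs no password:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the ssh_runner line above.
	out, err := exec.Command("sudo", "crictl", "--timeout=10s",
		"ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out)) // one container ID per line
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
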
	I1222 01:36:23.439962 2126229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:36:23.447425 2126229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:36:23.454913 2126229 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:36:23.454980 2126229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:36:23.462737 2126229 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:36:23.462761 2126229 kubeadm.go:158] found existing configuration files:
	
	I1222 01:36:23.462834 2126229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1222 01:36:23.471009 2126229 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:36:23.471077 2126229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:36:23.478288 2126229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1222 01:36:23.485875 2126229 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:36:23.485943 2126229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:36:23.493111 2126229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1222 01:36:23.500508 2126229 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:36:23.500571 2126229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:36:23.508022 2126229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1222 01:36:23.515565 2126229 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:36:23.515629 2126229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
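
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails; here the files are simply absent, so every grep exits 2 and every rm is a no-op. A compact sketch of that cleanup, with the endpoint and file list taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint
		}
		// Missing file or stale endpoint: remove it so kubeadm regenerates it.
		_ = os.Remove(f)
		fmt.Println("removed stale", f)
	}
}
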
	I1222 01:36:23.522845 2126229 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:36:23.571529 2126229 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 01:36:23.571846 2126229 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:36:23.612220 2126229 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:36:23.612303 2126229 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:36:23.612342 2126229 kubeadm.go:319] OS: Linux
	I1222 01:36:23.612391 2126229 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:36:23.612442 2126229 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:36:23.612501 2126229 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:36:23.612552 2126229 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:36:23.612604 2126229 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:36:23.612664 2126229 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:36:23.612713 2126229 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:36:23.612765 2126229 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:36:23.612813 2126229 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:36:23.711332 2126229 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:36:23.711446 2126229 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:36:23.711540 2126229 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:36:23.720023 2126229 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:36:23.726540 2126229 out.go:252]   - Generating certificates and keys ...
	I1222 01:36:23.726724 2126229 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:36:23.726805 2126229 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:36:23.984682 2126229 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:36:24.107341 2126229 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:36:24.298345 2126229 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:36:24.697138 2126229 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:36:25.523588 2126229 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:36:25.523805 2126229 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-733235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:36:26.756936 2126229 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:36:26.757291 2126229 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-733235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:36:27.139449 2126229 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:36:27.713617 2126229 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:36:28.171630 2126229 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:36:28.171991 2126229 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:36:28.714888 2126229 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:36:29.089109 2126229 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:36:30.046143 2126229 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:36:30.514006 2126229 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:36:31.115378 2126229 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:36:31.115500 2126229 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:36:31.115577 2126229 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:36:31.118987 2126229 out.go:252]   - Booting up control plane ...
	I1222 01:36:31.119136 2126229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:36:31.119225 2126229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:36:31.119308 2126229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:36:31.154414 2126229 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:36:31.154525 2126229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:36:31.163958 2126229 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:36:31.164059 2126229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:36:31.164098 2126229 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:36:31.289939 2126229 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:36:31.290074 2126229 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:36:32.288375 2126229 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00078233s
	I1222 01:36:32.292168 2126229 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 01:36:32.292269 2126229 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1222 01:36:32.292363 2126229 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 01:36:32.292446 2126229 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 01:36:34.678652 2126229 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.386109857s
	I1222 01:36:36.653473 2126229 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.361265954s
	I1222 01:36:38.293799 2126229 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001443118s
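
Each control-plane-check above is a poll against a component health endpoint until it answers 200. A sketch of the kube-apiserver leg, using the livez URL from the log; InsecureSkipVerify is a stand-in for loading the real cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // kubeadm's stated upper bound
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8444/livez")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kube-apiserver is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
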
	I1222 01:36:38.325469 2126229 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 01:36:38.342756 2126229 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 01:36:38.355308 2126229 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 01:36:38.355841 2126229 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-733235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 01:36:38.367592 2126229 kubeadm.go:319] [bootstrap-token] Using token: 3iom77.yoz09916ru1iy56p
	I1222 01:36:38.370521 2126229 out.go:252]   - Configuring RBAC rules ...
	I1222 01:36:38.370652 2126229 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 01:36:38.376658 2126229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 01:36:38.384364 2126229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 01:36:38.387974 2126229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 01:36:38.391817 2126229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 01:36:38.395601 2126229 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 01:36:38.704538 2126229 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 01:36:39.124905 2126229 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 01:36:39.700642 2126229 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 01:36:39.703117 2126229 kubeadm.go:319] 
	I1222 01:36:39.703209 2126229 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 01:36:39.703216 2126229 kubeadm.go:319] 
	I1222 01:36:39.703293 2126229 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 01:36:39.703298 2126229 kubeadm.go:319] 
	I1222 01:36:39.703323 2126229 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 01:36:39.703384 2126229 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 01:36:39.703447 2126229 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 01:36:39.703452 2126229 kubeadm.go:319] 
	I1222 01:36:39.703508 2126229 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 01:36:39.703512 2126229 kubeadm.go:319] 
	I1222 01:36:39.703560 2126229 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 01:36:39.703564 2126229 kubeadm.go:319] 
	I1222 01:36:39.703616 2126229 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 01:36:39.703692 2126229 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 01:36:39.703760 2126229 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 01:36:39.703768 2126229 kubeadm.go:319] 
	I1222 01:36:39.703852 2126229 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 01:36:39.703929 2126229 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 01:36:39.703933 2126229 kubeadm.go:319] 
	I1222 01:36:39.704016 2126229 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 3iom77.yoz09916ru1iy56p \
	I1222 01:36:39.704121 2126229 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 01:36:39.704141 2126229 kubeadm.go:319] 	--control-plane 
	I1222 01:36:39.704145 2126229 kubeadm.go:319] 
	I1222 01:36:39.704229 2126229 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 01:36:39.704242 2126229 kubeadm.go:319] 
	I1222 01:36:39.704326 2126229 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 3iom77.yoz09916ru1iy56p \
	I1222 01:36:39.704428 2126229 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 01:36:39.709015 2126229 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 01:36:39.709243 2126229 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:36:39.709375 2126229 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
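
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA certificate's Subject Public Key Info, which lets joining nodes pin the CA without shipping the cert itself. A sketch that recomputes it, assuming the CA sits in the certificateDir shown earlier (/var/lib/minikube/certs/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
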
	I1222 01:36:39.709393 2126229 cni.go:84] Creating CNI manager for ""
	I1222 01:36:39.709400 2126229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:36:39.712659 2126229 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 01:36:39.715568 2126229 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 01:36:39.719697 2126229 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 01:36:39.719715 2126229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 01:36:39.733187 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
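
The kindnet manifest is applied with the version-matched kubectl against the node-local kubeconfig rather than the host's. A sketch of that invocation via os/exec, with every path taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.3/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // kubectl prints created/configured per object
	if err != nil {
		panic(err)
	}
}
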
	I1222 01:36:40.033840 2126229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 01:36:40.034010 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:40.034225 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733235 minikube.k8s.io/updated_at=2025_12_22T01_36_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=default-k8s-diff-port-733235 minikube.k8s.io/primary=true
	I1222 01:36:40.290255 2126229 ops.go:34] apiserver oom_adj: -16
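
The oom_adj probe confirms the kernel will spare the apiserver under memory pressure (-16 here; lower means less likely to be killed). A sketch of the same check, assuming pgrep is available:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // expect a single apiserver process
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
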
	I1222 01:36:40.290372 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:40.790696 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:41.291170 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:41.791179 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:42.291481 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:42.791405 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:43.291240 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:43.790488 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:44.291176 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:44.790431 2126229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:36:44.925387 2126229 kubeadm.go:1114] duration metric: took 4.891443199s to wait for elevateKubeSystemPrivileges
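
The burst of identical `kubectl get sa default` runs above is a poll: the default ServiceAccount appears asynchronously after init, and the cluster-admin binding for kube-system is only useful once it exists. A sketch of that wait, reusing the binary and kubeconfig paths from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for start := time.Now(); time.Since(start) < 2*time.Minute; {
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing above
	}
	fmt.Println("timed out waiting for default service account")
}
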
	I1222 01:36:44.925420 2126229 kubeadm.go:403] duration metric: took 21.512333991s to StartCluster
	I1222 01:36:44.925438 2126229 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:44.925499 2126229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:36:44.926126 2126229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:44.926346 2126229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:36:44.926475 2126229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 01:36:44.926726 2126229 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:36:44.926696 2126229 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:36:44.926822 2126229 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-733235"
	I1222 01:36:44.926838 2126229 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-733235"
	I1222 01:36:44.926863 2126229 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:36:44.926869 2126229 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-733235"
	I1222 01:36:44.926896 2126229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733235"
	I1222 01:36:44.927257 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:36:44.927408 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:36:44.931751 2126229 out.go:179] * Verifying Kubernetes components...
	I1222 01:36:44.937470 2126229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:44.966423 2126229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:36:44.969539 2126229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:44.969567 2126229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:36:44.969639 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:44.973497 2126229 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-733235"
	I1222 01:36:44.973536 2126229 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:36:44.973945 2126229 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:36:45.019147 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:45.030369 2126229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:45.030391 2126229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:36:45.030462 2126229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:36:45.060107 2126229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38766 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:36:45.320064 2126229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 01:36:45.388523 2126229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:45.542449 2126229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:45.600793 2126229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:45.944040 2126229 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
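
The long sed pipeline a few lines up splices a hosts stanza into the CoreDNS Corefile so host.minikube.internal resolves to the gateway IP. A sketch of the same edit on an abbreviated sample Corefile (the real one carries more plugins); the IP is from the log:

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
    errors
    forward . /etc/resolv.conf
}`
	hosts := `    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
`
	// Insert the hosts plugin immediately before forward, as the sed script does.
	patched := strings.Replace(corefile,
		"    forward . /etc/resolv.conf",
		hosts+"    forward . /etc/resolv.conf", 1)
	fmt.Println(patched)
}
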
	I1222 01:36:45.944894 2126229 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733235" to be "Ready" ...
	I1222 01:36:46.451772 2126229 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-733235" context rescaled to 1 replicas
	I1222 01:36:46.533696 2126229 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1222 01:36:46.536599 2126229 addons.go:530] duration metric: took 1.609897555s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1222 01:36:47.948111 2126229 node_ready.go:57] node "default-k8s-diff-port-733235" has "Ready":"False" status (will retry)
	W1222 01:36:50.448144 2126229 node_ready.go:57] node "default-k8s-diff-port-733235" has "Ready":"False" status (will retry)
	W1222 01:36:52.448652 2126229 node_ready.go:57] node "default-k8s-diff-port-733235" has "Ready":"False" status (will retry)
	W1222 01:36:54.947504 2126229 node_ready.go:57] node "default-k8s-diff-port-733235" has "Ready":"False" status (will retry)
	W1222 01:36:56.947735 2126229 node_ready.go:57] node "default-k8s-diff-port-733235" has "Ready":"False" status (will retry)
	I1222 01:36:58.449133 2126229 node_ready.go:49] node "default-k8s-diff-port-733235" is "Ready"
	I1222 01:36:58.449183 2126229 node_ready.go:38] duration metric: took 12.504251941s for node "default-k8s-diff-port-733235" to be "Ready" ...
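
The retries above are minikube's hand-rolled node-readiness wait. An equivalent one-shot version, assuming kubectl on PATH and pointed at this cluster, leans on kubectl's built-in waiter; the node name and 6m budget are from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "wait",
		"--for=condition=Ready",
		"node/default-k8s-diff-port-733235",
		"--timeout=6m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
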
	I1222 01:36:58.449202 2126229 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:36:58.449290 2126229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:36:58.466527 2126229 api_server.go:72] duration metric: took 13.540142708s to wait for apiserver process to appear ...
	I1222 01:36:58.466552 2126229 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:36:58.466594 2126229 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1222 01:36:58.491008 2126229 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1222 01:36:58.494185 2126229 api_server.go:141] control plane version: v1.34.3
	I1222 01:36:58.494226 2126229 api_server.go:131] duration metric: took 27.663082ms to wait for apiserver health ...
	I1222 01:36:58.494235 2126229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:36:58.498527 2126229 system_pods.go:59] 8 kube-system pods found
	I1222 01:36:58.498570 2126229 system_pods.go:61] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:36:58.498578 2126229 system_pods.go:61] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running
	I1222 01:36:58.498584 2126229 system_pods.go:61] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:36:58.498588 2126229 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running
	I1222 01:36:58.498600 2126229 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running
	I1222 01:36:58.498609 2126229 system_pods.go:61] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:36:58.498613 2126229 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running
	I1222 01:36:58.498621 2126229 system_pods.go:61] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:36:58.498638 2126229 system_pods.go:74] duration metric: took 4.396159ms to wait for pod list to return data ...
	I1222 01:36:58.498648 2126229 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:36:58.501533 2126229 default_sa.go:45] found service account: "default"
	I1222 01:36:58.501564 2126229 default_sa.go:55] duration metric: took 2.90653ms for default service account to be created ...
	I1222 01:36:58.501585 2126229 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:36:58.504821 2126229 system_pods.go:86] 8 kube-system pods found
	I1222 01:36:58.504860 2126229 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:36:58.504868 2126229 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running
	I1222 01:36:58.504874 2126229 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:36:58.504879 2126229 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running
	I1222 01:36:58.504883 2126229 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running
	I1222 01:36:58.504887 2126229 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:36:58.504895 2126229 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running
	I1222 01:36:58.504901 2126229 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:36:58.504938 2126229 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1222 01:36:58.718988 2126229 system_pods.go:86] 8 kube-system pods found
	I1222 01:36:58.719036 2126229 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:36:58.719044 2126229 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running
	I1222 01:36:58.719056 2126229 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:36:58.719061 2126229 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running
	I1222 01:36:58.719066 2126229 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running
	I1222 01:36:58.719070 2126229 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:36:58.719078 2126229 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running
	I1222 01:36:58.719089 2126229 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:36:59.010398 2126229 system_pods.go:86] 8 kube-system pods found
	I1222 01:36:59.010436 2126229 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:36:59.010444 2126229 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running
	I1222 01:36:59.010451 2126229 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:36:59.010456 2126229 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running
	I1222 01:36:59.010461 2126229 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running
	I1222 01:36:59.010465 2126229 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:36:59.010470 2126229 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running
	I1222 01:36:59.010479 2126229 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:36:59.345973 2126229 system_pods.go:86] 8 kube-system pods found
	I1222 01:36:59.346050 2126229 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:36:59.346070 2126229 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running
	I1222 01:36:59.346089 2126229 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:36:59.346116 2126229 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running
	I1222 01:36:59.346122 2126229 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running
	I1222 01:36:59.346127 2126229 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:36:59.346132 2126229 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running
	I1222 01:36:59.346149 2126229 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:36:59.767575 2126229 system_pods.go:86] 8 kube-system pods found
	I1222 01:36:59.767614 2126229 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Running
	I1222 01:36:59.767622 2126229 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running
	I1222 01:36:59.767628 2126229 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:36:59.767633 2126229 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running
	I1222 01:36:59.767637 2126229 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running
	I1222 01:36:59.767642 2126229 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:36:59.767646 2126229 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running
	I1222 01:36:59.767694 2126229 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Running
	I1222 01:36:59.767702 2126229 system_pods.go:126] duration metric: took 1.266101848s to wait for k8s-apps to be running ...
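
The k8s-apps wait re-lists kube-system pods until nothing is Pending; in this run kube-dns (CoreDNS) is the last to flip. A sketch that polls just that label, assuming kubectl on PATH, with the 200ms interval from the retry line above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for start := time.Now(); time.Since(start) < 6*time.Minute; {
		out, err := exec.Command("kubectl",
			"-n", "kube-system",
			"get", "pods", "-l", "k8s-app=kube-dns",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("kube-dns is running")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-dns")
}
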
	I1222 01:36:59.767715 2126229 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:36:59.767783 2126229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:36:59.780813 2126229 system_svc.go:56] duration metric: took 13.089533ms WaitForService to wait for kubelet
	I1222 01:36:59.780841 2126229 kubeadm.go:587] duration metric: took 14.854461478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:36:59.780859 2126229 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:36:59.783660 2126229 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:36:59.783692 2126229 node_conditions.go:123] node cpu capacity is 2
	I1222 01:36:59.783704 2126229 node_conditions.go:105] duration metric: took 2.839717ms to run NodePressure ...
	I1222 01:36:59.783717 2126229 start.go:242] waiting for startup goroutines ...
	I1222 01:36:59.783725 2126229 start.go:247] waiting for cluster config update ...
	I1222 01:36:59.783736 2126229 start.go:256] writing updated cluster config ...
	I1222 01:36:59.784034 2126229 ssh_runner.go:195] Run: rm -f paused
	I1222 01:36:59.787826 2126229 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:36:59.791410 2126229 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xtnvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:36:59.796411 2126229 pod_ready.go:94] pod "coredns-66bc5c9577-xtnvg" is "Ready"
	I1222 01:36:59.796445 2126229 pod_ready.go:86] duration metric: took 5.012615ms for pod "coredns-66bc5c9577-xtnvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:36:59.798659 2126229 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:36:59.803239 2126229 pod_ready.go:94] pod "etcd-default-k8s-diff-port-733235" is "Ready"
	I1222 01:36:59.803311 2126229 pod_ready.go:86] duration metric: took 4.586397ms for pod "etcd-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:36:59.805980 2126229 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:36:59.810807 2126229 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-733235" is "Ready"
	I1222 01:36:59.810835 2126229 pod_ready.go:86] duration metric: took 4.827981ms for pod "kube-apiserver-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:36:59.812938 2126229 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:37:00.193686 2126229 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-733235" is "Ready"
	I1222 01:37:00.193717 2126229 pod_ready.go:86] duration metric: took 380.752278ms for pod "kube-controller-manager-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:37:00.392353 2126229 pod_ready.go:83] waiting for pod "kube-proxy-j6fr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:37:00.791373 2126229 pod_ready.go:94] pod "kube-proxy-j6fr2" is "Ready"
	I1222 01:37:00.791403 2126229 pod_ready.go:86] duration metric: took 399.022323ms for pod "kube-proxy-j6fr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:37:00.991775 2126229 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:37:01.392289 2126229 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-733235" is "Ready"
	I1222 01:37:01.392319 2126229 pod_ready.go:86] duration metric: took 400.515472ms for pod "kube-scheduler-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:37:01.392334 2126229 pod_ready.go:40] duration metric: took 1.604474451s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:37:01.451680 2126229 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:37:01.454840 2126229 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-733235" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 22 01:36:58 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:36:58.405539914Z" level=info msg="Created container 31f1f93871c4789d8cb618692f6e74f17feb0d5e4229f728bd52d4c80905b77c: kube-system/coredns-66bc5c9577-xtnvg/coredns" id=154ad688-b805-499f-a3cc-32136b353125 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:36:58 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:36:58.40640429Z" level=info msg="Starting container: 31f1f93871c4789d8cb618692f6e74f17feb0d5e4229f728bd52d4c80905b77c" id=1000deb1-fce5-4946-a057-c3b24703e65c name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:36:58 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:36:58.409957824Z" level=info msg="Started container" PID=1800 containerID=31f1f93871c4789d8cb618692f6e74f17feb0d5e4229f728bd52d4c80905b77c description=kube-system/coredns-66bc5c9577-xtnvg/coredns id=1000deb1-fce5-4946-a057-c3b24703e65c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f0e93840e3f33e4f572ffa8eec61f0ffe84f91bd33c34ed76c8eeaaf88dda19
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.004426711Z" level=info msg="Running pod sandbox: default/busybox/POD" id=31a210ba-63fc-4b51-b70a-ef6a387c8b3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.004515365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.012591568Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7 UID:924925f3-1e1b-431a-b6ed-5a473df1a8e8 NetNS:/var/run/netns/863ab37f-4167-4632-99f9-4749bc037826 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400007bbd8}] Aliases:map[]}"
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.012771436Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.025136453Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7 UID:924925f3-1e1b-431a-b6ed-5a473df1a8e8 NetNS:/var/run/netns/863ab37f-4167-4632-99f9-4749bc037826 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400007bbd8}] Aliases:map[]}"
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.025294331Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.028853912Z" level=info msg="Ran pod sandbox 878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7 with infra container: default/busybox/POD" id=31a210ba-63fc-4b51-b70a-ef6a387c8b3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.030137229Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fa2f631c-e253-4975-9c0e-f39613662469 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.030280543Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fa2f631c-e253-4975-9c0e-f39613662469 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.030323537Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fa2f631c-e253-4975-9c0e-f39613662469 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.031133465Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b288b8dc-c854-469a-931b-593fdd5ee837 name=/runtime.v1.ImageService/PullImage
	Dec 22 01:37:02 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:02.033610002Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.074103237Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b288b8dc-c854-469a-931b-593fdd5ee837 name=/runtime.v1.ImageService/PullImage
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.074741126Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0cfa5c31-bc78-4524-b36d-f96088353d6c name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.076357979Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29837402-f753-4d1f-a7c2-168f3fc9b090 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.084312709Z" level=info msg="Creating container: default/busybox/busybox" id=edd63e7a-415d-4c59-bc43-209d2d5c506a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.084459831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.089223839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.089776241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.11609282Z" level=info msg="Created container 67a89975fb20608507c8e7a574d53b4ab4e350c417483564e75eedb8fa2dbf4a: default/busybox/busybox" id=edd63e7a-415d-4c59-bc43-209d2d5c506a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.120953982Z" level=info msg="Starting container: 67a89975fb20608507c8e7a574d53b4ab4e350c417483564e75eedb8fa2dbf4a" id=d9555cd3-6801-4d11-a477-0166dc9a57cc name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:37:04 default-k8s-diff-port-733235 crio[840]: time="2025-12-22T01:37:04.124472144Z" level=info msg="Started container" PID=1854 containerID=67a89975fb20608507c8e7a574d53b4ab4e350c417483564e75eedb8fa2dbf4a description=default/busybox/busybox id=d9555cd3-6801-4d11-a477-0166dc9a57cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	67a89975fb206       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   878791d823c51       busybox                                                default
	31f1f93871c47       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   3f0e93840e3f3       coredns-66bc5c9577-xtnvg                               kube-system
	18edc47fdd844       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   aa2f02a6f0213       storage-provisioner                                    kube-system
	428b981f479a7       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   aa792783b0a32       kindnet-tnxxl                                          kube-system
	0fdcce7059822       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                      26 seconds ago      Running             kube-proxy                0                   908b345a3fdd2       kube-proxy-j6fr2                                       kube-system
	0636dee03dfda       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                      38 seconds ago      Running             kube-scheduler            0                   b7d895e2c235b       kube-scheduler-default-k8s-diff-port-733235            kube-system
	fe6016919d4ad       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                      38 seconds ago      Running             kube-apiserver            0                   27c518ca5d762       kube-apiserver-default-k8s-diff-port-733235            kube-system
	6ee9dee090f76       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      38 seconds ago      Running             etcd                      0                   13af678059e35       etcd-default-k8s-diff-port-733235                      kube-system
	649d6daad7d9c       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                      38 seconds ago      Running             kube-controller-manager   0                   b3a3383d1bd6d       kube-controller-manager-default-k8s-diff-port-733235   kube-system
	
	
	==> coredns [31f1f93871c4789d8cb618692f6e74f17feb0d5e4229f728bd52d4c80905b77c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43325 - 35201 "HINFO IN 7821206648777689818.1984055300139855488. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012201814s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-733235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-733235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=default-k8s-diff-port-733235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_36_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-733235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:37:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:37:09 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:37:09 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:37:09 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:37:09 +0000   Mon, 22 Dec 2025 01:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-733235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                6fde222b-2b74-47ec-8d2f-d10c695b4327
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-xtnvg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-default-k8s-diff-port-733235                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-tnxxl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-733235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-733235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-j6fr2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-733235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 40s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  39s (x8 over 40s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s (x8 over 40s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 40s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node default-k8s-diff-port-733235 event: Registered Node default-k8s-diff-port-733235 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-733235 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec22 01:06] overlayfs: idmapped layers are currently not supported
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6ee9dee090f76d4a9faf1826edb5002bf70bdb517cf04210f8b72c958d6cb386] <==
	{"level":"warn","ts":"2025-12-22T01:36:35.201012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.219145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.241779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.260792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.277371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.287741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.313432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.328404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.342964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.379224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.396882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.408988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.439290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.442687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.465855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.482040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.514424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.519131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.534382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.552202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.574189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.596610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.624455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.642648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:36:35.741447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:37:11 up 1 day,  8:19,  0 user,  load average: 2.11, 2.69, 2.42
	Linux default-k8s-diff-port-733235 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [428b981f479a7fa679ce51dae3bf28c97328e64c1b6f548b45638319bf6fe4d3] <==
	I1222 01:36:47.332602       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:36:47.332818       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:36:47.332942       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:36:47.333002       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:36:47.333037       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:36:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:36:47.529716       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:36:47.529749       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:36:47.529759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:36:47.529864       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1222 01:36:47.830800       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:36:47.830831       1 metrics.go:72] Registering metrics
	I1222 01:36:47.830891       1 controller.go:711] "Syncing nftables rules"
	I1222 01:36:57.533937       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:36:57.534054       1 main.go:301] handling current node
	I1222 01:37:07.530394       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:37:07.530470       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fe6016919d4adb595715dce9b8bc64692104686835fc7674adfb6cdecdc4bdb6] <==
	I1222 01:36:36.713501       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1222 01:36:36.714736       1 controller.go:667] quota admission added evaluator for: namespaces
	I1222 01:36:36.733912       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:36:36.734388       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1222 01:36:36.738116       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:36:36.738489       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1222 01:36:36.902201       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:36:37.366080       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1222 01:36:37.382426       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1222 01:36:37.382528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:36:38.091065       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:36:38.143862       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:36:38.263852       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1222 01:36:38.271497       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1222 01:36:38.272640       1 controller.go:667] quota admission added evaluator for: endpoints
	I1222 01:36:38.279630       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:36:38.585968       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:36:39.109013       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1222 01:36:39.123903       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1222 01:36:39.140581       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1222 01:36:43.790543       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:36:43.993526       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:36:44.004438       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:36:44.010587       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:36:44.394158       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [649d6daad7d9ca4b5f4293c4e62d34bfe078a83b4d82bb3bdec44de0c546e122] <==
	I1222 01:36:43.633302       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1222 01:36:43.633364       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1222 01:36:43.633811       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1222 01:36:43.635181       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1222 01:36:43.635357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1222 01:36:43.635404       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1222 01:36:43.635901       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1222 01:36:43.635947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1222 01:36:43.636301       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:36:43.636356       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1222 01:36:43.636839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1222 01:36:43.636884       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1222 01:36:43.636899       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:36:43.636928       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1222 01:36:43.637582       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1222 01:36:43.639265       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 01:36:43.639370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1222 01:36:43.642540       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:36:43.646894       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:36:43.649046       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 01:36:43.668665       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:36:43.668759       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 01:36:43.668791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 01:36:43.672225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:36:58.635388       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0fdcce7059822489f2ce471e667b0930b7a89ac88bc16b24a869f41bd9953228] <==
	I1222 01:36:45.075437       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:36:45.306912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:36:45.407729       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:36:45.407773       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1222 01:36:45.407840       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:36:45.468757       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:36:45.468895       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:36:45.474731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:36:45.478008       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:36:45.478033       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:36:45.488127       1 config.go:200] "Starting service config controller"
	I1222 01:36:45.488146       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:36:45.488166       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:36:45.488180       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:36:45.488192       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:36:45.488196       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:36:45.488899       1 config.go:309] "Starting node config controller"
	I1222 01:36:45.488908       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:36:45.488914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:36:45.588723       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1222 01:36:45.588760       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:36:45.588814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0636dee03dfda1cbc08be0c4bfd0243d88915c991e7d5918168f9592da276602] <==
	E1222 01:36:36.643678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1222 01:36:36.643737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1222 01:36:36.643816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 01:36:36.643865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:36:36.643910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1222 01:36:36.650343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1222 01:36:36.650536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1222 01:36:36.643962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1222 01:36:36.650955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:36:36.651028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1222 01:36:36.651075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1222 01:36:36.651132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1222 01:36:36.651197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1222 01:36:36.651283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1222 01:36:36.651363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1222 01:36:36.651432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1222 01:36:36.651497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1222 01:36:36.652586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1222 01:36:37.544111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1222 01:36:37.546541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1222 01:36:37.593563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1222 01:36:37.600822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1222 01:36:37.764831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1222 01:36:37.800652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1222 01:36:38.344089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 22 01:36:40 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:40.297226    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-733235" podStartSLOduration=1.297209987 podStartE2EDuration="1.297209987s" podCreationTimestamp="2025-12-22 01:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:36:40.275376497 +0000 UTC m=+1.323830589" watchObservedRunningTime="2025-12-22 01:36:40.297209987 +0000 UTC m=+1.345664071"
	Dec 22 01:36:43 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:43.668503    1322 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 22 01:36:43 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:43.669637    1322 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544211    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ae022fe-4121-4114-951f-be8d91fa4da1-lib-modules\") pod \"kindnet-tnxxl\" (UID: \"0ae022fe-4121-4114-951f-be8d91fa4da1\") " pod="kube-system/kindnet-tnxxl"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544289    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88ac4eac-fa44-4275-b221-2f761c9d3dd3-xtables-lock\") pod \"kube-proxy-j6fr2\" (UID: \"88ac4eac-fa44-4275-b221-2f761c9d3dd3\") " pod="kube-system/kube-proxy-j6fr2"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544311    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88ac4eac-fa44-4275-b221-2f761c9d3dd3-kube-proxy\") pod \"kube-proxy-j6fr2\" (UID: \"88ac4eac-fa44-4275-b221-2f761c9d3dd3\") " pod="kube-system/kube-proxy-j6fr2"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544331    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ae022fe-4121-4114-951f-be8d91fa4da1-xtables-lock\") pod \"kindnet-tnxxl\" (UID: \"0ae022fe-4121-4114-951f-be8d91fa4da1\") " pod="kube-system/kindnet-tnxxl"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544349    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twzn\" (UniqueName: \"kubernetes.io/projected/0ae022fe-4121-4114-951f-be8d91fa4da1-kube-api-access-6twzn\") pod \"kindnet-tnxxl\" (UID: \"0ae022fe-4121-4114-951f-be8d91fa4da1\") " pod="kube-system/kindnet-tnxxl"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544371    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnr7q\" (UniqueName: \"kubernetes.io/projected/88ac4eac-fa44-4275-b221-2f761c9d3dd3-kube-api-access-xnr7q\") pod \"kube-proxy-j6fr2\" (UID: \"88ac4eac-fa44-4275-b221-2f761c9d3dd3\") " pod="kube-system/kube-proxy-j6fr2"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544397    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0ae022fe-4121-4114-951f-be8d91fa4da1-cni-cfg\") pod \"kindnet-tnxxl\" (UID: \"0ae022fe-4121-4114-951f-be8d91fa4da1\") " pod="kube-system/kindnet-tnxxl"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.544419    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88ac4eac-fa44-4275-b221-2f761c9d3dd3-lib-modules\") pod \"kube-proxy-j6fr2\" (UID: \"88ac4eac-fa44-4275-b221-2f761c9d3dd3\") " pod="kube-system/kube-proxy-j6fr2"
	Dec 22 01:36:44 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:44.666782    1322 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 22 01:36:45 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:45.318148    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j6fr2" podStartSLOduration=1.318123383 podStartE2EDuration="1.318123383s" podCreationTimestamp="2025-12-22 01:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:36:45.304934563 +0000 UTC m=+6.353388655" watchObservedRunningTime="2025-12-22 01:36:45.318123383 +0000 UTC m=+6.366577475"
	Dec 22 01:36:47 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:47.399257    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tnxxl" podStartSLOduration=0.897273561 podStartE2EDuration="3.399238733s" podCreationTimestamp="2025-12-22 01:36:44 +0000 UTC" firstStartedPulling="2025-12-22 01:36:44.755559992 +0000 UTC m=+5.804014076" lastFinishedPulling="2025-12-22 01:36:47.257525156 +0000 UTC m=+8.305979248" observedRunningTime="2025-12-22 01:36:47.328902224 +0000 UTC m=+8.377356324" watchObservedRunningTime="2025-12-22 01:36:47.399238733 +0000 UTC m=+8.447692817"
	Dec 22 01:36:57 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:57.970109    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 22 01:36:58 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:58.059834    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79571637-a410-4e69-a9f6-6fb2d8a9fa55-config-volume\") pod \"coredns-66bc5c9577-xtnvg\" (UID: \"79571637-a410-4e69-a9f6-6fb2d8a9fa55\") " pod="kube-system/coredns-66bc5c9577-xtnvg"
	Dec 22 01:36:58 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:58.060104    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/12ef3141-54f6-41b1-abd5-2dc491507ff8-tmp\") pod \"storage-provisioner\" (UID: \"12ef3141-54f6-41b1-abd5-2dc491507ff8\") " pod="kube-system/storage-provisioner"
	Dec 22 01:36:58 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:58.060159    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9swjg\" (UniqueName: \"kubernetes.io/projected/12ef3141-54f6-41b1-abd5-2dc491507ff8-kube-api-access-9swjg\") pod \"storage-provisioner\" (UID: \"12ef3141-54f6-41b1-abd5-2dc491507ff8\") " pod="kube-system/storage-provisioner"
	Dec 22 01:36:58 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:58.060186    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzdx\" (UniqueName: \"kubernetes.io/projected/79571637-a410-4e69-a9f6-6fb2d8a9fa55-kube-api-access-ktzdx\") pod \"coredns-66bc5c9577-xtnvg\" (UID: \"79571637-a410-4e69-a9f6-6fb2d8a9fa55\") " pod="kube-system/coredns-66bc5c9577-xtnvg"
	Dec 22 01:36:58 default-k8s-diff-port-733235 kubelet[1322]: W1222 01:36:58.319895    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/crio-aa2f02a6f0213a5a4b33fc04849bc9c4e55f7f3a0767d8407d6bd8cbe133776b WatchSource:0}: Error finding container aa2f02a6f0213a5a4b33fc04849bc9c4e55f7f3a0767d8407d6bd8cbe133776b: Status 404 returned error can't find the container with id aa2f02a6f0213a5a4b33fc04849bc9c4e55f7f3a0767d8407d6bd8cbe133776b
	Dec 22 01:36:58 default-k8s-diff-port-733235 kubelet[1322]: W1222 01:36:58.363584    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/crio-3f0e93840e3f33e4f572ffa8eec61f0ffe84f91bd33c34ed76c8eeaaf88dda19 WatchSource:0}: Error finding container 3f0e93840e3f33e4f572ffa8eec61f0ffe84f91bd33c34ed76c8eeaaf88dda19: Status 404 returned error can't find the container with id 3f0e93840e3f33e4f572ffa8eec61f0ffe84f91bd33c34ed76c8eeaaf88dda19
	Dec 22 01:36:59 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:36:59.374737    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.374716284 podStartE2EDuration="13.374716284s" podCreationTimestamp="2025-12-22 01:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:36:59.36091337 +0000 UTC m=+20.409367462" watchObservedRunningTime="2025-12-22 01:36:59.374716284 +0000 UTC m=+20.423170368"
	Dec 22 01:37:01 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:37:01.691960    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xtnvg" podStartSLOduration=17.691935854 podStartE2EDuration="17.691935854s" podCreationTimestamp="2025-12-22 01:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-22 01:36:59.375460721 +0000 UTC m=+20.423914912" watchObservedRunningTime="2025-12-22 01:37:01.691935854 +0000 UTC m=+22.740389937"
	Dec 22 01:37:01 default-k8s-diff-port-733235 kubelet[1322]: I1222 01:37:01.789165    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kl7q\" (UniqueName: \"kubernetes.io/projected/924925f3-1e1b-431a-b6ed-5a473df1a8e8-kube-api-access-6kl7q\") pod \"busybox\" (UID: \"924925f3-1e1b-431a-b6ed-5a473df1a8e8\") " pod="default/busybox"
	Dec 22 01:37:02 default-k8s-diff-port-733235 kubelet[1322]: W1222 01:37:02.027228    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/crio-878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7 WatchSource:0}: Error finding container 878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7: Status 404 returned error can't find the container with id 878791d823c516dd269c3479914468d063cd57bcf77569775a879c492bd715f7
	
	
	==> storage-provisioner [18edc47fdd8441f4ef995ff3a266d8a8b00c19b666fb57f1ca337270ba4963d4] <==
	I1222 01:36:58.399955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:36:58.418046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:36:58.418189       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1222 01:36:58.426058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:58.435943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:36:58.436118       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:36:58.436291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733235_381e7b9f-521d-490b-902c-489fd4c2ce29!
	I1222 01:36:58.436893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11324e73-4990-4f66-8f18-92ad4f625ab0", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-733235_381e7b9f-521d-490b-902c-489fd4c2ce29 became leader
	W1222 01:36:58.445495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:36:58.473662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:36:58.537512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733235_381e7b9f-521d-490b-902c-489fd4c2ce29!
	W1222 01:37:00.477122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:00.481933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:02.484881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:02.489069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:04.492364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:04.496955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:06.499470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:06.505964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:08.510000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:08.514345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:10.517081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:37:10.521317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.41s)
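
[Editor's note, not part of the captured logs] The storage-provisioner post-mortem above ends in a steady two-second drumbeat of "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings: the provisioner still takes its leader lock on an Endpoints object (kube-system/k8s.io-minikube-hostpath), so every acquire/renew of that lock trips the API server's deprecation warning. A minimal sketch of the Lease-based election that client-go recommends instead; the lock name and namespace are taken from the log, everything else is illustrative and not minikube's actual code:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		// Assumes in-cluster credentials, as the storage-provisioner pod has.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io/v1 Lease lock; replaces the deprecated
		// Endpoints lock that produces the warnings in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lock name from the log
				Namespace: "kube-system",
			},
			Client: client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: os.Getenv("POD_NAME"), // must be unique per replica
			},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}

Because renews then touch a Lease rather than an Endpoints object, the per-request deprecation warning disappears, and the lock updates stop generating Endpoints churn for every watcher of that resource.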

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.28s)
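
[Editor's note, not part of the captured logs] In the failure record below, every pause attempt dies at the same step: minikube enumerates containers with crictl successfully, then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this CRI-O node; the retry after 400ms repeats the identical command and fails the same way, ending in GUEST_PAUSE. A hedged sketch of a fallback a caller could use, with `listRunning` being a hypothetical helper rather than minikube's actual pause.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listRunning asks runc for its view of running containers and, when the
	// runc state directory is missing (as in the log below), falls back to
	// crictl, which queries the CRI runtime (CRI-O here) over its own socket
	// and succeeded in the same log where runc failed.
	func listRunning() ([]string, error) {
		if out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output(); err == nil {
			// A real caller would decode the JSON array of container states here.
			return []string{string(out)}, nil
		}
		out, err := exec.Command("sudo", "crictl", "ps", "--quiet").Output()
		if err != nil {
			return nil, fmt.Errorf("both runc and crictl failed: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listRunning()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d running containers\n", len(ids))
	}

The sketch only illustrates the failure mode: in the capture below, attempts two and three reuse the same runc-only path, so the retry cannot succeed.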

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-733235 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-733235 --alsologtostderr -v=1: exit status 80 (2.055447928s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-733235 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1222 01:38:25.507600 2132880 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:25.507815 2132880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:25.507848 2132880 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:25.507866 2132880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:25.508128 2132880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:38:25.508419 2132880 out.go:368] Setting JSON to false
	I1222 01:38:25.508468 2132880 mustload.go:66] Loading cluster: default-k8s-diff-port-733235
	I1222 01:38:25.508904 2132880 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:38:25.509453 2132880 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:38:25.530673 2132880 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:38:25.531001 2132880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:25.595371 2132880 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-22 01:38:25.585821678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:25.596068 2132880 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-733235 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1222 01:38:25.601286 2132880 out.go:179] * Pausing node default-k8s-diff-port-733235 ... 
	I1222 01:38:25.604083 2132880 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:38:25.604433 2132880 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:25.604488 2132880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:38:25.622174 2132880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:38:25.715842 2132880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:38:25.732065 2132880 pause.go:52] kubelet running: true
	I1222 01:38:25.732153 2132880 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:38:26.008070 2132880 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:38:26.008185 2132880 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:38:26.098217 2132880 cri.go:96] found id: "f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01"
	I1222 01:38:26.098241 2132880 cri.go:96] found id: "893aacc74e612f41dc6362e747257f3761995b16d65649fdccd3803289f042d2"
	I1222 01:38:26.098245 2132880 cri.go:96] found id: "1ee0cbd88ee07483ade1428ffe942035401854e6ac8f7f373724b892d15f0bf1"
	I1222 01:38:26.098249 2132880 cri.go:96] found id: "69c3f005ed94b1ccf31b7a27d3116e7ae80725f8604454ff4ed94158879868d6"
	I1222 01:38:26.098252 2132880 cri.go:96] found id: "ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31"
	I1222 01:38:26.098256 2132880 cri.go:96] found id: "905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5"
	I1222 01:38:26.098259 2132880 cri.go:96] found id: "be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232"
	I1222 01:38:26.098263 2132880 cri.go:96] found id: "d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4"
	I1222 01:38:26.098266 2132880 cri.go:96] found id: "0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8"
	I1222 01:38:26.098273 2132880 cri.go:96] found id: "c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	I1222 01:38:26.098276 2132880 cri.go:96] found id: "234dc7879d1d2f4481efdf1307fa02e5e13fdc950270816a19c3e22111ef36f0"
	I1222 01:38:26.098279 2132880 cri.go:96] found id: ""
	I1222 01:38:26.098331 2132880 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:38:26.112450 2132880 retry.go:84] will retry after 400ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:38:26Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:38:26.477044 2132880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:38:26.490173 2132880 pause.go:52] kubelet running: false
	I1222 01:38:26.490238 2132880 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:38:26.666976 2132880 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:38:26.667055 2132880 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:38:26.740615 2132880 cri.go:96] found id: "f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01"
	I1222 01:38:26.740673 2132880 cri.go:96] found id: "893aacc74e612f41dc6362e747257f3761995b16d65649fdccd3803289f042d2"
	I1222 01:38:26.740692 2132880 cri.go:96] found id: "1ee0cbd88ee07483ade1428ffe942035401854e6ac8f7f373724b892d15f0bf1"
	I1222 01:38:26.740709 2132880 cri.go:96] found id: "69c3f005ed94b1ccf31b7a27d3116e7ae80725f8604454ff4ed94158879868d6"
	I1222 01:38:26.740726 2132880 cri.go:96] found id: "ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31"
	I1222 01:38:26.740742 2132880 cri.go:96] found id: "905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5"
	I1222 01:38:26.740757 2132880 cri.go:96] found id: "be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232"
	I1222 01:38:26.740773 2132880 cri.go:96] found id: "d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4"
	I1222 01:38:26.740796 2132880 cri.go:96] found id: "0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8"
	I1222 01:38:26.740815 2132880 cri.go:96] found id: "c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	I1222 01:38:26.740831 2132880 cri.go:96] found id: "234dc7879d1d2f4481efdf1307fa02e5e13fdc950270816a19c3e22111ef36f0"
	I1222 01:38:26.740857 2132880 cri.go:96] found id: ""
	I1222 01:38:26.740930 2132880 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:38:27.208355 2132880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:38:27.221414 2132880 pause.go:52] kubelet running: false
	I1222 01:38:27.221503 2132880 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1222 01:38:27.402298 2132880 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1222 01:38:27.402391 2132880 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1222 01:38:27.473026 2132880 cri.go:96] found id: "f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01"
	I1222 01:38:27.473064 2132880 cri.go:96] found id: "893aacc74e612f41dc6362e747257f3761995b16d65649fdccd3803289f042d2"
	I1222 01:38:27.473070 2132880 cri.go:96] found id: "1ee0cbd88ee07483ade1428ffe942035401854e6ac8f7f373724b892d15f0bf1"
	I1222 01:38:27.473075 2132880 cri.go:96] found id: "69c3f005ed94b1ccf31b7a27d3116e7ae80725f8604454ff4ed94158879868d6"
	I1222 01:38:27.473078 2132880 cri.go:96] found id: "ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31"
	I1222 01:38:27.473082 2132880 cri.go:96] found id: "905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5"
	I1222 01:38:27.473086 2132880 cri.go:96] found id: "be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232"
	I1222 01:38:27.473089 2132880 cri.go:96] found id: "d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4"
	I1222 01:38:27.473092 2132880 cri.go:96] found id: "0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8"
	I1222 01:38:27.473098 2132880 cri.go:96] found id: "c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	I1222 01:38:27.473106 2132880 cri.go:96] found id: "234dc7879d1d2f4481efdf1307fa02e5e13fdc950270816a19c3e22111ef36f0"
	I1222 01:38:27.473110 2132880 cri.go:96] found id: ""
	I1222 01:38:27.473160 2132880 ssh_runner.go:195] Run: sudo runc list -f json
	I1222 01:38:27.488848 2132880 out.go:203] 
	W1222 01:38:27.491674 2132880 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:38:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1222 01:38:27.491696 2132880 out.go:285] * 
	W1222 01:38:27.510440 2132880 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:38:27.513283 2132880 out.go:203] 

                                                
                                                
** /stderr **
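
The exit status 80 above reduces to one failing probe: before pausing, minikube shells in and runs "sudo runc list -f json" to enumerate running containers, and every attempt exits 1 with "open /run/runc: no such file or directory" (runc has no state directory to read on this CRI-O node), so once the retry budget is spent the command aborts with GUEST_PAUSE. Below is a minimal sketch of that probe-and-retry pattern, assuming a local shell; minikube drives the identical command through ssh_runner, and the helper name here is hypothetical.

// Sketch (not minikube's actual code): run "sudo runc list -f json"
// and retry after ~400ms, mirroring retry.go:84 in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func listRunningContainers() ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		lastErr = err
		time.Sleep(400 * time.Millisecond) // backoff seen in the log
	}
	// Same shape as the GUEST_PAUSE message: "list running: runc: ..."
	return nil, fmt.Errorf("list running: runc: %w", lastErr)
}

func main() {
	out, err := listRunningContainers()
	if err != nil {
		fmt.Println("pause aborts here:", err) // this run: exit status 1
		return
	}
	fmt.Printf("%s\n", out)
}
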
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-733235 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-733235
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-733235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269",
	        "Created": "2025-12-22T01:36:14.986448621Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2130274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:37:24.80441533Z",
	            "FinishedAt": "2025-12-22T01:37:23.933145342Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/hosts",
	        "LogPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269-json.log",
	        "Name": "/default-k8s-diff-port-733235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-733235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-733235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269",
	                "LowerDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-733235",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-733235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-733235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-733235",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-733235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c1c603c7b3a47f992315171ef58e621f205cfd83798149c1c2bae2d062f24f4",
	            "SandboxKey": "/var/run/docker/netns/4c1c603c7b3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38771"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38772"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38775"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38773"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38774"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-733235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:b2:35:a7:fe:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "587a11d4872e57b94e293749bb81cafed3a21b4741fd9542ba2b75e1a005d466",
	                    "EndpointID": "1a3cf427d9ac22323ac30b190f2274ebacaa2b76037a30227e3eae7853bda982",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-733235",
	                        "7c78b7294c07"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
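
Most of the inspect dump above is noise for triage; the fields that matter are State (still "running", never paused) and the host port map. A short sketch of extracting just the state with the same --format selector the harness itself uses later in this log (the helper below is hypothetical, not part of helpers_test.go):

// Sketch: query only the container state instead of the full
// "docker inspect" JSON. The --format selector matches the one
// the log shows minikube running via cli_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-diff-port-733235")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(state) // "running" for the dump above
}
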
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235: exit status 2 (332.975607ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-733235 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-733235 logs -n 25: (1.262307548s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                   │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                         │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                         │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                               │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:37:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:37:24.520921 2130151 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:37:24.521104 2130151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:37:24.521136 2130151 out.go:374] Setting ErrFile to fd 2...
	I1222 01:37:24.521155 2130151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:37:24.521474 2130151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:37:24.521883 2130151 out.go:368] Setting JSON to false
	I1222 01:37:24.522804 2130151 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116394,"bootTime":1766251051,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:37:24.522895 2130151 start.go:143] virtualization:  
	I1222 01:37:24.526109 2130151 out.go:179] * [default-k8s-diff-port-733235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:37:24.530025 2130151 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:37:24.530096 2130151 notify.go:221] Checking for updates...
	I1222 01:37:24.534202 2130151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:37:24.536991 2130151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:37:24.539877 2130151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:37:24.542759 2130151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:37:24.545567 2130151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:37:24.548746 2130151 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:37:24.549489 2130151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:37:24.579714 2130151 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:37:24.579829 2130151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:37:24.648678 2130151 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:37:24.63935489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:37:24.648784 2130151 docker.go:319] overlay module found
	I1222 01:37:24.651964 2130151 out.go:179] * Using the docker driver based on existing profile
	I1222 01:37:24.654812 2130151 start.go:309] selected driver: docker
	I1222 01:37:24.654835 2130151 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:37:24.654935 2130151 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:37:24.655664 2130151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:37:24.721952 2130151 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:37:24.71238698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:37:24.722284 2130151 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:37:24.722316 2130151 cni.go:84] Creating CNI manager for ""
	I1222 01:37:24.722365 2130151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:37:24.722410 2130151 start.go:353] cluster config:
	{Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:37:24.725666 2130151 out.go:179] * Starting "default-k8s-diff-port-733235" primary control-plane node in "default-k8s-diff-port-733235" cluster
	I1222 01:37:24.728575 2130151 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:37:24.731444 2130151 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:37:24.734222 2130151 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:37:24.734269 2130151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:37:24.734281 2130151 cache.go:65] Caching tarball of preloaded images
	I1222 01:37:24.734280 2130151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:37:24.734362 2130151 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:37:24.734372 2130151 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:37:24.734481 2130151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json ...
	I1222 01:37:24.753888 2130151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:37:24.753912 2130151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:37:24.753926 2130151 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:37:24.753971 2130151 start.go:360] acquireMachinesLock for default-k8s-diff-port-733235: {Name:mk42f93f802a61825395428c6e75b6089866c8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:37:24.754033 2130151 start.go:364] duration metric: took 37.931µs to acquireMachinesLock for "default-k8s-diff-port-733235"
	I1222 01:37:24.754058 2130151 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:37:24.754068 2130151 fix.go:54] fixHost starting: 
	I1222 01:37:24.754330 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:24.771097 2130151 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733235: state=Stopped err=<nil>
	W1222 01:37:24.771128 2130151 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:37:24.445159 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000649464s
	I1222 01:37:24.445188 2113859 kubeadm.go:319] 
	I1222 01:37:24.445245 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:37:24.445278 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:37:24.445510 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:37:24.445521 2113859 kubeadm.go:319] 
	I1222 01:37:24.445625 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:37:24.445657 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:37:24.445688 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:37:24.445692 2113859 kubeadm.go:319] 
	I1222 01:37:24.449085 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:37:24.449561 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:37:24.449681 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:37:24.449965 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:37:24.449971 2113859 kubeadm.go:319] 
	I1222 01:37:24.450044 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:37:24.450217 2113859 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000649464s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
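
The wait-control-plane failure above is mechanical: kubeadm polls the kubelet's local healthz endpoint until it answers 200 or the 4m0s budget expires, and on this run every request is refused because the kubelet never comes up. A rough equivalent of that probe loop, assuming the default healthz address 127.0.0.1:10248 shown in the log:

// Sketch of the kubelet-check phase: poll the healthz endpoint
// until it returns 200 OK or the deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitKubeletHealthy(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err) // this run: connection refused for the full 4m
	}
}
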
	
	I1222 01:37:24.450303 2113859 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:37:24.890509 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:37:24.915976 2113859 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:37:24.916036 2113859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:37:24.926777 2113859 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:37:24.926795 2113859 kubeadm.go:158] found existing configuration files:
	
	I1222 01:37:24.926846 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:37:24.935332 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:37:24.935391 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:37:24.943361 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:37:24.952220 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:37:24.952286 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:37:24.960609 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:37:24.969890 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:37:24.969948 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:37:24.978647 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:37:24.987603 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:37:24.987662 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
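
The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and here every grep exits 2 because kubeadm reset had already removed the files, so the rm calls are no-ops. A compact local-filesystem sketch of the same loop (a stand-in for the logged sudo grep/rm commands, not minikube's actual code):

// Sketch: drop any kubeconfig that does not reference the
// expected control-plane URL, as the cleanup in the log does.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // ignore error: the file may already be gone
			fmt.Println("removed stale config:", f)
		}
	}
}
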
	I1222 01:37:24.999818 2113859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:37:25.076514 2113859 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:37:25.087372 2113859 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:37:25.206650 2113859 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:37:25.206722 2113859 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:37:25.206758 2113859 kubeadm.go:319] OS: Linux
	I1222 01:37:25.206804 2113859 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:37:25.206853 2113859 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:37:25.206900 2113859 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:37:25.206949 2113859 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:37:25.206997 2113859 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:37:25.207051 2113859 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:37:25.207097 2113859 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:37:25.207145 2113859 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:37:25.207191 2113859 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:37:25.290369 2113859 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:37:25.290485 2113859 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:37:25.290577 2113859 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:37:25.313723 2113859 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:37:25.319101 2113859 out.go:252]   - Generating certificates and keys ...
	I1222 01:37:25.319194 2113859 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:37:25.319264 2113859 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:37:25.319343 2113859 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:37:25.319409 2113859 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:37:25.319483 2113859 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:37:25.319543 2113859 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:37:25.319610 2113859 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:37:25.319675 2113859 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:37:25.319755 2113859 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:37:25.319830 2113859 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:37:25.320128 2113859 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:37:25.320259 2113859 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:37:25.579598 2113859 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:37:26.290496 2113859 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:37:26.459743 2113859 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:37:26.687849 2113859 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:37:27.003327 2113859 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:37:27.003462 2113859 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:37:27.005755 2113859 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:37:27.009338 2113859 out.go:252]   - Booting up control plane ...
	I1222 01:37:27.009453 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:37:27.009975 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:37:27.011908 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:37:27.027550 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:37:27.027957 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:37:27.035510 2113859 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:37:27.035816 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:37:27.035861 2113859 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:37:27.169794 2113859 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:37:27.169917 2113859 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
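The kubelet-check step polls the kubelet's local healthz endpoint until it answers 200 or the 4m0s budget (from the log line above) runs out. A minimal polling loop under those assumptions; the 2-second retry interval is invented for illustration:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // kubelet is up
    			}
    		}
    		time.Sleep(2 * time.Second) // illustrative interval
    	}
    	return fmt.Errorf("kubelet not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute))
    }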
	I1222 01:37:24.774213 2130151 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-733235" ...
	I1222 01:37:24.774300 2130151 cli_runner.go:164] Run: docker start default-k8s-diff-port-733235
	I1222 01:37:25.035402 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:25.059750 2130151 kic.go:430] container "default-k8s-diff-port-733235" state is running.
	I1222 01:37:25.060323 2130151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:37:25.087238 2130151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json ...
	I1222 01:37:25.087466 2130151 machine.go:94] provisionDockerMachine start ...
	I1222 01:37:25.087534 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:25.106881 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:25.107215 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:25.107244 2130151 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:37:25.107874 2130151 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49652->127.0.0.1:38771: read: connection reset by peer
	I1222 01:37:28.240802 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733235
	
	I1222 01:37:28.240828 2130151 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-733235"
	I1222 01:37:28.240900 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.258108 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:28.258418 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:28.258435 2130151 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733235 && echo "default-k8s-diff-port-733235" | sudo tee /etc/hostname
	I1222 01:37:28.401917 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733235
	
	I1222 01:37:28.402068 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.419290 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:28.419607 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:28.419625 2130151 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:37:28.549530 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: 
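The provisioning commands above run over libmachine's SSH client against the forwarded port 127.0.0.1:38771; the earlier "connection reset by peer" is simply a dial attempt before the container's sshd was ready. A sketch of the same pattern with golang.org/x/crypto/ssh (key path and address copied from the sshutil log lines; the host-key handling is a test-rig shortcut, not a recommendation):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs skip host verification
    	}
    	// A real caller retries: the first dial in the log failed because
    	// the container's sshd was not yet listening.
    	client, err := ssh.Dial("tcp", "127.0.0.1:38771", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname")
    	fmt.Println(string(out), err)
    }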
	I1222 01:37:28.549556 2130151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:37:28.549584 2130151 ubuntu.go:190] setting up certificates
	I1222 01:37:28.549605 2130151 provision.go:84] configureAuth start
	I1222 01:37:28.549682 2130151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:37:28.569707 2130151 provision.go:143] copyHostCerts
	I1222 01:37:28.569777 2130151 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:37:28.569800 2130151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:37:28.569880 2130151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:37:28.569981 2130151 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:37:28.569991 2130151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:37:28.570017 2130151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:37:28.570071 2130151 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:37:28.570080 2130151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:37:28.570104 2130151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:37:28.570192 2130151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733235 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-733235 localhost minikube]
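The generated server.pem carries SANs for the container IP, localhost, and the machine names listed in san=[...] above. A self-contained crypto/x509 sketch of signing such a SAN-bearing server certificate: the throwaway in-memory CA stands in for ca.pem/ca-key.pem so the example runs on its own, and all key sizes, serials, and subjects are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; a real run would load ca.pem / ca-key.pem instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-733235"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-733235"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		// SANs matching the log's san=[...] list
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:    []string{"default-k8s-diff-port-733235", "localhost", "minikube"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }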
	I1222 01:37:28.689451 2130151 provision.go:177] copyRemoteCerts
	I1222 01:37:28.689577 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:37:28.689650 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.706730 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:28.800993 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:37:28.817800 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1222 01:37:28.834969 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:37:28.851928 2130151 provision.go:87] duration metric: took 302.295966ms to configureAuth
	I1222 01:37:28.851997 2130151 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:37:28.852215 2130151 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:37:28.852351 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.869373 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:28.869687 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:28.869710 2130151 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:37:29.211321 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:37:29.211343 2130151 machine.go:97] duration metric: took 4.123860423s to provisionDockerMachine
	I1222 01:37:29.211354 2130151 start.go:293] postStartSetup for "default-k8s-diff-port-733235" (driver="docker")
	I1222 01:37:29.211364 2130151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:37:29.211422 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:37:29.211471 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.229034 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.325252 2130151 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:37:29.328582 2130151 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:37:29.328614 2130151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:37:29.328625 2130151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:37:29.328677 2130151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:37:29.328765 2130151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:37:29.328866 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:37:29.336175 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:37:29.355719 2130151 start.go:296] duration metric: took 144.351069ms for postStartSetup
	I1222 01:37:29.355841 2130151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:37:29.355929 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.375688 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.470411 2130151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:37:29.475240 2130151 fix.go:56] duration metric: took 4.721166126s for fixHost
	I1222 01:37:29.475265 2130151 start.go:83] releasing machines lock for "default-k8s-diff-port-733235", held for 4.721217693s
	I1222 01:37:29.475351 2130151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:37:29.492521 2130151 ssh_runner.go:195] Run: cat /version.json
	I1222 01:37:29.492571 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.492609 2130151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:37:29.492668 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.513441 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.514261 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.705168 2130151 ssh_runner.go:195] Run: systemctl --version
	I1222 01:37:29.711330 2130151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:37:29.746777 2130151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:37:29.751022 2130151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:37:29.751094 2130151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:37:29.758432 2130151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:37:29.758456 2130151 start.go:496] detecting cgroup driver to use...
	I1222 01:37:29.758512 2130151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:37:29.758564 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:37:29.773449 2130151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:37:29.786043 2130151 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:37:29.786131 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:37:29.801449 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:37:29.814427 2130151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:37:29.931330 2130151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:37:30.052296 2130151 docker.go:234] disabling docker service ...
	I1222 01:37:30.052373 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:37:30.069405 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:37:30.082905 2130151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:37:30.221538 2130151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:37:30.337482 2130151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:37:30.350440 2130151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:37:30.364189 2130151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:37:30.364272 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.373292 2130151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:37:30.373397 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.382333 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.391178 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.399958 2130151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:37:30.408024 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.416948 2130151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.425658 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.434526 2130151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:37:30.442378 2130151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:37:30.450115 2130151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:37:30.560535 2130151 ssh_runner.go:195] Run: sudo systemctl restart crio
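The run of sed invocations above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before reloading systemd and restarting the service. The single-key rewrite can be sketched in Go with a regexp; the file path and key names come from the log, while the `setKey` helper is invented for the sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setKey rewrites every line matching `key = ...` to the given value,
    // mirroring: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    func setKey(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(fmt.Sprintf(`%s = %q`, key, value)))
    }

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setKey(conf, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		panic(err)
    	}
    	// a real run then does `systemctl daemon-reload && systemctl restart crio`
    }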
	I1222 01:37:30.743873 2130151 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:37:30.743985 2130151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:37:30.747907 2130151 start.go:564] Will wait 60s for crictl version
	I1222 01:37:30.747986 2130151 ssh_runner.go:195] Run: which crictl
	I1222 01:37:30.751442 2130151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:37:30.775633 2130151 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:37:30.775715 2130151 ssh_runner.go:195] Run: crio --version
	I1222 01:37:30.807550 2130151 ssh_runner.go:195] Run: crio --version
	I1222 01:37:30.843817 2130151 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:37:30.846644 2130151 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-733235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:37:30.867154 2130151 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:37:30.871473 2130151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
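Updating host.minikube.internal is a filter-and-append over /etc/hosts: drop any line already ending in the name, then append the fresh mapping, exactly as the grep -v / echo pipeline above does. A minimal Go equivalent (IP and hostname copied from the log):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.85.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// mirrors: grep -v $'\thost.minikube.internal$'
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }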
	I1222 01:37:30.881795 2130151 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:37:30.881929 2130151 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:37:30.881990 2130151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:37:30.920370 2130151 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:37:30.920398 2130151 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:37:30.920461 2130151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:37:30.949936 2130151 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:37:30.949961 2130151 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:37:30.949970 2130151 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.3 crio true true} ...
	I1222 01:37:30.950078 2130151 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-733235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:37:30.950163 2130151 ssh_runner.go:195] Run: crio config
	I1222 01:37:31.015583 2130151 cni.go:84] Creating CNI manager for ""
	I1222 01:37:31.015609 2130151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:37:31.015631 2130151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:37:31.015655 2130151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733235 NodeName:default-k8s-diff-port-733235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:37:31.015788 2130151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
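The kubeadm.yaml dump above is rendered from the kubeadm options struct logged at kubeadm.go:190. A toy text/template rendering of just the InitConfiguration head; the `Opts` struct and its fields are invented for the sketch and far smaller than minikube's real options:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Opts is a stand-in for the kubeadm options logged above.
    type Opts struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	t.Execute(os.Stdout, Opts{
    		AdvertiseAddress: "192.168.85.2",
    		BindPort:         8444,
    		NodeName:         "default-k8s-diff-port-733235",
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    	})
    }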
	
	I1222 01:37:31.015875 2130151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:37:31.023758 2130151 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:37:31.023849 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:37:31.031328 2130151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1222 01:37:31.043495 2130151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:37:31.056679 2130151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1222 01:37:31.069545 2130151 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:37:31.073244 2130151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:37:31.083166 2130151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:37:31.197352 2130151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:37:31.219237 2130151 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235 for IP: 192.168.85.2
	I1222 01:37:31.219270 2130151 certs.go:195] generating shared ca certs ...
	I1222 01:37:31.219288 2130151 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:31.219445 2130151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:37:31.219501 2130151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:37:31.219512 2130151 certs.go:257] generating profile certs ...
	I1222 01:37:31.219603 2130151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.key
	I1222 01:37:31.219687 2130151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key.be3f76d2
	I1222 01:37:31.219741 2130151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key
	I1222 01:37:31.219864 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:37:31.219923 2130151 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:37:31.219937 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:37:31.219976 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:37:31.220004 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:37:31.220043 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:37:31.220092 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:37:31.220764 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:37:31.241429 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:37:31.260406 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:37:31.283362 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:37:31.301843 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1222 01:37:31.317919 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:37:31.342374 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:37:31.361800 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:37:31.382133 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:37:31.406168 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:37:31.423287 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:37:31.442648 2130151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:37:31.455594 2130151 ssh_runner.go:195] Run: openssl version
	I1222 01:37:31.461852 2130151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.469628 2130151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:37:31.478091 2130151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.483007 2130151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.483114 2130151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.525771 2130151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:37:31.532937 2130151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.539804 2130151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:37:31.546917 2130151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.550673 2130151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.550762 2130151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.591164 2130151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:37:31.600272 2130151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.608384 2130151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:37:31.616885 2130151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.621174 2130151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.621266 2130151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.663758 2130151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
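Each CA dropped under /usr/share/ca-certificates is made trusted by symlinking it as <subject-hash>.0 into /etc/ssl/certs, where the hash is whatever `openssl x509 -hash -noout` prints (3ec20f2e, b5213941, 51391683 above). OpenSSL's subject hash is fiddly to reproduce by hand, so this sketch simply shells out for it; the `trust` helper is invented for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func trust(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	fmt.Println(pemPath, "->", link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := trust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }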
	I1222 01:37:31.671328 2130151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:37:31.675031 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:37:31.716968 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:37:31.758744 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:37:31.801176 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:37:31.852261 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:37:31.911893 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
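`openssl x509 -noout -checkend 86400` exits non-zero when the certificate expires within the next 24 hours. The Go equivalent is parsing the PEM and comparing NotAfter against now plus the window; a small sketch using one of the paths checked above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires inside the given window, like `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	fmt.Println(soon, err)
    }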
	I1222 01:37:31.969935 2130151 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:37:31.970073 2130151 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:37:31.970168 2130151 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:37:32.047279 2130151 cri.go:96] found id: "905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5"
	I1222 01:37:32.047348 2130151 cri.go:96] found id: "be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232"
	I1222 01:37:32.047375 2130151 cri.go:96] found id: "d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4"
	I1222 01:37:32.047391 2130151 cri.go:96] found id: "0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8"
	I1222 01:37:32.047421 2130151 cri.go:96] found id: ""
	I1222 01:37:32.047487 2130151 ssh_runner.go:195] Run: sudo runc list -f json
	W1222 01:37:32.065017 2130151 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:37:32Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:37:32.065173 2130151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:37:32.080963 2130151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:37:32.081021 2130151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:37:32.081100 2130151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:37:32.092384 2130151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:37:32.092875 2130151 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-733235" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:37:32.093067 2130151 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-733235" cluster setting kubeconfig missing "default-k8s-diff-port-733235" context setting]
	I1222 01:37:32.093471 2130151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:32.094898 2130151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:37:32.107280 2130151 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:37:32.107356 2130151 kubeadm.go:602] duration metric: took 26.302008ms to restartPrimaryControlPlane
	I1222 01:37:32.107378 2130151 kubeadm.go:403] duration metric: took 137.455548ms to StartCluster
	I1222 01:37:32.107424 2130151 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:32.107514 2130151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:37:32.108179 2130151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:32.108443 2130151 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:37:32.108809 2130151 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:37:32.108879 2130151 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-733235"
	I1222 01:37:32.108893 2130151 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-733235"
	W1222 01:37:32.108899 2130151 addons.go:248] addon storage-provisioner should already be in state true
	I1222 01:37:32.108919 2130151 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:37:32.109395 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.109660 2130151 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:37:32.109772 2130151 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-733235"
	I1222 01:37:32.109817 2130151 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-733235"
	W1222 01:37:32.109836 2130151 addons.go:248] addon dashboard should already be in state true
	I1222 01:37:32.109883 2130151 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:37:32.110387 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.110758 2130151 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-733235"
	I1222 01:37:32.110789 2130151 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733235"
	I1222 01:37:32.111088 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.114255 2130151 out.go:179] * Verifying Kubernetes components...
	I1222 01:37:32.121176 2130151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:37:32.149078 2130151 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:37:32.152768 2130151 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:37:32.152791 2130151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:37:32.152858 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:32.187171 2130151 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:37:32.191461 2130151 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:37:32.192282 2130151 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-733235"
	W1222 01:37:32.192298 2130151 addons.go:248] addon default-storageclass should already be in state true
	I1222 01:37:32.192321 2130151 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:37:32.192733 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.197538 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:37:32.197570 2130151 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:37:32.197647 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:32.199384 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:32.266068 2130151 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:37:32.266095 2130151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:37:32.266171 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:32.270848 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:32.294620 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:32.476088 2130151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:37:32.532143 2130151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:37:32.592166 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:37:32.592196 2130151 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:37:32.636963 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:37:32.636989 2130151 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:37:32.645772 2130151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:37:32.727430 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:37:32.727458 2130151 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:37:32.784157 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:37:32.784181 2130151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:37:32.822456 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:37:32.822490 2130151 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:37:32.846989 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:37:32.847035 2130151 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:37:32.871944 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:37:32.871970 2130151 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:37:32.885736 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:37:32.885761 2130151 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:37:32.899998 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:37:32.900025 2130151 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:37:32.913154 2130151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:37:38.679134 2130151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.203011467s)
	I1222 01:37:38.679188 2130151 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.147022963s)
	I1222 01:37:38.679228 2130151 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733235" to be "Ready" ...
	I1222 01:37:38.679435 2130151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.033633478s)
	I1222 01:37:38.680004 2130151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.766814749s)
	I1222 01:37:38.683083 2130151 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-733235 addons enable metrics-server
	
	I1222 01:37:38.697862 2130151 node_ready.go:49] node "default-k8s-diff-port-733235" is "Ready"
	I1222 01:37:38.697946 2130151 node_ready.go:38] duration metric: took 18.683587ms for node "default-k8s-diff-port-733235" to be "Ready" ...
	I1222 01:37:38.697988 2130151 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:37:38.698091 2130151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:37:38.709530 2130151 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1222 01:37:38.712427 2130151 addons.go:530] duration metric: took 6.603611827s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1222 01:37:38.712505 2130151 api_server.go:72] duration metric: took 6.604005825s to wait for apiserver process to appear ...
	I1222 01:37:38.712527 2130151 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:37:38.712547 2130151 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1222 01:37:38.720716 2130151 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1222 01:37:38.721998 2130151 api_server.go:141] control plane version: v1.34.3
	I1222 01:37:38.722023 2130151 api_server.go:131] duration metric: took 9.48823ms to wait for apiserver health ...
	I1222 01:37:38.722033 2130151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:37:38.725674 2130151 system_pods.go:59] 8 kube-system pods found
	I1222 01:37:38.725754 2130151 system_pods.go:61] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:37:38.725768 2130151 system_pods.go:61] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:37:38.725774 2130151 system_pods.go:61] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:37:38.725780 2130151 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:37:38.725787 2130151 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:37:38.725791 2130151 system_pods.go:61] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:37:38.725797 2130151 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:37:38.725804 2130151 system_pods.go:61] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Running
	I1222 01:37:38.725812 2130151 system_pods.go:74] duration metric: took 3.774376ms to wait for pod list to return data ...
	I1222 01:37:38.725822 2130151 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:37:38.728351 2130151 default_sa.go:45] found service account: "default"
	I1222 01:37:38.728372 2130151 default_sa.go:55] duration metric: took 2.544399ms for default service account to be created ...
	I1222 01:37:38.728381 2130151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:37:38.731844 2130151 system_pods.go:86] 8 kube-system pods found
	I1222 01:37:38.731883 2130151 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:37:38.731894 2130151 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:37:38.731900 2130151 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:37:38.731914 2130151 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:37:38.731922 2130151 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:37:38.731926 2130151 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:37:38.731933 2130151 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:37:38.731945 2130151 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Running
	I1222 01:37:38.731952 2130151 system_pods.go:126] duration metric: took 3.565201ms to wait for k8s-apps to be running ...
	I1222 01:37:38.731958 2130151 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:37:38.732026 2130151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:37:38.749710 2130151 system_svc.go:56] duration metric: took 17.740887ms WaitForService to wait for kubelet
	I1222 01:37:38.749738 2130151 kubeadm.go:587] duration metric: took 6.641240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:37:38.749756 2130151 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:37:38.754006 2130151 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:37:38.754090 2130151 node_conditions.go:123] node cpu capacity is 2
	I1222 01:37:38.754136 2130151 node_conditions.go:105] duration metric: took 4.374383ms to run NodePressure ...
	I1222 01:37:38.754167 2130151 start.go:242] waiting for startup goroutines ...
	I1222 01:37:38.754188 2130151 start.go:247] waiting for cluster config update ...
	I1222 01:37:38.754229 2130151 start.go:256] writing updated cluster config ...
	I1222 01:37:38.754603 2130151 ssh_runner.go:195] Run: rm -f paused
	I1222 01:37:38.758814 2130151 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:37:38.763894 2130151 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xtnvg" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:37:40.769178 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:42.770291 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:45.278098 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:47.768831 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:49.769278 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:51.769517 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:54.270589 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:56.769554 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:59.269969 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:01.769206 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:04.269605 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:06.270377 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:08.272146 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:10.769256 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	I1222 01:38:12.270065 2130151 pod_ready.go:94] pod "coredns-66bc5c9577-xtnvg" is "Ready"
	I1222 01:38:12.270153 2130151 pod_ready.go:86] duration metric: took 33.506165229s for pod "coredns-66bc5c9577-xtnvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.272860 2130151 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.277098 2130151 pod_ready.go:94] pod "etcd-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:12.277123 2130151 pod_ready.go:86] duration metric: took 4.236278ms for pod "etcd-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.279047 2130151 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.283176 2130151 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:12.283202 2130151 pod_ready.go:86] duration metric: took 4.130443ms for pod "kube-apiserver-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.285312 2130151 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.468208 2130151 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:12.468246 2130151 pod_ready.go:86] duration metric: took 182.886118ms for pod "kube-controller-manager-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.668307 2130151 pod_ready.go:83] waiting for pod "kube-proxy-j6fr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.068285 2130151 pod_ready.go:94] pod "kube-proxy-j6fr2" is "Ready"
	I1222 01:38:13.068311 2130151 pod_ready.go:86] duration metric: took 399.964376ms for pod "kube-proxy-j6fr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.268189 2130151 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.668430 2130151 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:13.668462 2130151 pod_ready.go:86] duration metric: took 400.230206ms for pod "kube-scheduler-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.668476 2130151 pod_ready.go:40] duration metric: took 34.909582309s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:38:13.721024 2130151 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:38:13.724282 2130151 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-733235" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.567609184Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb23d6c8-d2e8-4892-b766-0fbc18e4d15e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.568735033Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fa3d9dcf-4135-43e3-a653-f1c34d26389a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.56882993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.573847945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.574136945Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f28514f509f3b4d6cc4b080a7ad4cc43e1aaedd8e6a2110d9049e7124f68f5b2/merged/etc/passwd: no such file or directory"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.57424415Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f28514f509f3b4d6cc4b080a7ad4cc43e1aaedd8e6a2110d9049e7124f68f5b2/merged/etc/group: no such file or directory"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.574558233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.592944402Z" level=info msg="Created container f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01: kube-system/storage-provisioner/storage-provisioner" id=fa3d9dcf-4135-43e3-a653-f1c34d26389a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.604786678Z" level=info msg="Starting container: f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01" id=77d96f31-fc65-4766-93bb-6c813f9ad7dd name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.606993651Z" level=info msg="Started container" PID=1651 containerID=f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01 description=kube-system/storage-provisioner/storage-provisioner id=77d96f31-fc65-4766-93bb-6c813f9ad7dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=1eb235cea5e350ab6316e42e41f663cc1da6fc592e64f2c65599e068e4815a97
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.443648685Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.450817872Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.450848624Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.45086779Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.453962991Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.454095976Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.454189175Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.45738011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.457409902Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.457429175Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.460247626Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.460276934Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.460293064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.463405914Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.46344355Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f6d6f6396fe13       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   1eb235cea5e35       storage-provisioner                                    kube-system
	c0036ba277be0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   ba56e39d5b6b1       dashboard-metrics-scraper-6ffb444bf9-87c6b             kubernetes-dashboard
	234dc7879d1d2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   4b12a0d16b24f       kubernetes-dashboard-855c9754f9-2sjcl                  kubernetes-dashboard
	e61219fe02447       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   c66db85941c83       busybox                                                default
	893aacc74e612       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   0cc8b08569b32       coredns-66bc5c9577-xtnvg                               kube-system
	1ee0cbd88ee07       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           50 seconds ago      Running             kindnet-cni                 1                   225855acf586d       kindnet-tnxxl                                          kube-system
	69c3f005ed94b       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                           50 seconds ago      Running             kube-proxy                  1                   ff40123cab014       kube-proxy-j6fr2                                       kube-system
	ed7b1ae749750       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   1eb235cea5e35       storage-provisioner                                    kube-system
	905af8f5658bb       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                           56 seconds ago      Running             kube-controller-manager     1                   7cf8d8bbfa6d0       kube-controller-manager-default-k8s-diff-port-733235   kube-system
	be7fb25ee1383       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                           56 seconds ago      Running             kube-apiserver              1                   8b788fafb4247       kube-apiserver-default-k8s-diff-port-733235            kube-system
	d67aa42f388cd       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                           56 seconds ago      Running             kube-scheduler              1                   9204146104e66       kube-scheduler-default-k8s-diff-port-733235            kube-system
	0c4e1557aaf4b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           56 seconds ago      Running             etcd                        1                   95ebeec67c2dd       etcd-default-k8s-diff-port-733235                      kube-system
	
	
	==> coredns [893aacc74e612f41dc6362e747257f3761995b16d65649fdccd3803289f042d2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52498 - 62345 "HINFO IN 365848151036176430.3163089285270450224. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014286674s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-733235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-733235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=default-k8s-diff-port-733235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_36_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-733235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:38:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-733235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                6fde222b-2b74-47ec-8d2f-d10c695b4327
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-xtnvg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-default-k8s-diff-port-733235                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         109s
	  kube-system                 kindnet-tnxxl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-733235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-733235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-j6fr2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-733235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-87c6b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2sjcl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 103s                 kube-proxy       
	  Normal   Starting                 49s                  kube-proxy       
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s (x8 over 117s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 117s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x8 over 117s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    109s                 kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 109s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  109s                 kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     109s                 kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   Starting                 109s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           105s                 node-controller  Node default-k8s-diff-port-733235 event: Registered Node default-k8s-diff-port-733235 in Controller
	  Normal   NodeReady                91s                  kubelet          Node default-k8s-diff-port-733235 status is now: NodeReady
	  Normal   Starting                 57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                  node-controller  Node default-k8s-diff-port-733235 event: Registered Node default-k8s-diff-port-733235 in Controller
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8] <==
	{"level":"warn","ts":"2025-12-22T01:37:35.849576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.869838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.882181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.898393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.915001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.945646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.957550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.985120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.017038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.033219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.049631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.065220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.081525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.109559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.125563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.187906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.198494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.213878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.228832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.243377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.277869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.291539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.304830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.353965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:38:28 up 1 day,  8:20,  0 user,  load average: 1.32, 2.35, 2.33
	Linux default-k8s-diff-port-733235 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ee0cbd88ee07483ade1428ffe942035401854e6ac8f7f373724b892d15f0bf1] <==
	I1222 01:37:38.162553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:37:38.224139       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:37:38.224288       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:37:38.224301       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:37:38.224319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:37:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:37:38.442334       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:37:38.442408       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:37:38.442441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:37:38.443289       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1222 01:38:08.442486       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1222 01:38:08.443628       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1222 01:38:08.443695       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1222 01:38:08.443728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1222 01:38:09.843554       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:38:09.843620       1 metrics.go:72] Registering metrics
	I1222 01:38:09.843680       1 controller.go:711] "Syncing nftables rules"
	I1222 01:38:18.442564       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:38:18.442700       1 main.go:301] handling current node
	I1222 01:38:28.449406       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:38:28.449438       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232] <==
	I1222 01:37:37.162795       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1222 01:37:37.181271       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1222 01:37:37.183441       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1222 01:37:37.184989       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1222 01:37:37.187542       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:37:37.187822       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:37:37.189618       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1222 01:37:37.189652       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1222 01:37:37.189761       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1222 01:37:37.217904       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1222 01:37:37.217938       1 policy_source.go:240] refreshing policies
	E1222 01:37:37.247754       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1222 01:37:37.254878       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:37:37.264622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:37:37.465893       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:37:37.894990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:37:38.188938       1 controller.go:667] quota admission added evaluator for: namespaces
	I1222 01:37:38.257295       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1222 01:37:38.308155       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:37:38.360809       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:37:38.585544       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.255.80"}
	I1222 01:37:38.607602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.114.43"}
	I1222 01:37:40.430611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:37:40.774915       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:37:40.824995       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5] <==
	I1222 01:37:40.377625       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:37:40.383221       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:37:40.385446       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1222 01:37:40.397736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:37:40.398960       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:37:40.404285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:37:40.405275       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 01:37:40.405350       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:37:40.407731       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1222 01:37:40.411967       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 01:37:40.416426       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1222 01:37:40.417395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1222 01:37:40.417452       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1222 01:37:40.417442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:37:40.417504       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 01:37:40.417512       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 01:37:40.417631       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1222 01:37:40.417393       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 01:37:40.439166       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1222 01:37:40.439336       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1222 01:37:40.439397       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1222 01:37:40.439437       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1222 01:37:40.439465       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1222 01:37:40.441213       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:37:40.442098       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [69c3f005ed94b1ccf31b7a27d3116e7ae80725f8604454ff4ed94158879868d6] <==
	I1222 01:37:38.421199       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:37:38.625594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:37:38.726478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:37:38.726510       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1222 01:37:38.726591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:37:38.750311       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:37:38.750359       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:37:38.754006       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:37:38.754280       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:37:38.754296       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:37:38.755347       1 config.go:200] "Starting service config controller"
	I1222 01:37:38.755413       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:37:38.764300       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:37:38.769628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:37:38.766560       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:37:38.769744       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:37:38.765231       1 config.go:309] "Starting node config controller"
	I1222 01:37:38.769810       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:37:38.769838       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:37:38.855697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:37:38.869838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1222 01:37:38.869840       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4] <==
	I1222 01:37:35.012600       1 serving.go:386] Generated self-signed cert in-memory
	W1222 01:37:37.081676       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1222 01:37:37.081711       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:37:37.081721       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1222 01:37:37.081729       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1222 01:37:37.175611       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1222 01:37:37.175648       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:37:37.183539       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:37:37.184190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:37:37.184228       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:37:37.184306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1222 01:37:37.284374       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076166     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4x48\" (UniqueName: \"kubernetes.io/projected/b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5-kube-api-access-l4x48\") pod \"dashboard-metrics-scraper-6ffb444bf9-87c6b\" (UID: \"b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076239     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-87c6b\" (UID: \"b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076266     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/585268bd-d9fd-4358-8e2e-6805b573af31-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2sjcl\" (UID: \"585268bd-d9fd-4358-8e2e-6805b573af31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2sjcl"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076283     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585sx\" (UniqueName: \"kubernetes.io/projected/585268bd-d9fd-4358-8e2e-6805b573af31-kube-api-access-585sx\") pod \"kubernetes-dashboard-855c9754f9-2sjcl\" (UID: \"585268bd-d9fd-4358-8e2e-6805b573af31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2sjcl"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: W1222 01:37:41.298178     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/crio-4b12a0d16b24fcee9b4e92eadd34ce7041d4f73ef2779fade7c4ced54d366ffe WatchSource:0}: Error finding container 4b12a0d16b24fcee9b4e92eadd34ce7041d4f73ef2779fade7c4ced54d366ffe: Status 404 returned error can't find the container with id 4b12a0d16b24fcee9b4e92eadd34ce7041d4f73ef2779fade7c4ced54d366ffe
	Dec 22 01:37:42 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:42.173030     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 22 01:37:46 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:46.512360     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2sjcl" podStartSLOduration=1.9223210339999999 podStartE2EDuration="6.51233861s" podCreationTimestamp="2025-12-22 01:37:40 +0000 UTC" firstStartedPulling="2025-12-22 01:37:41.302007599 +0000 UTC m=+10.088067226" lastFinishedPulling="2025-12-22 01:37:45.892025183 +0000 UTC m=+14.678084802" observedRunningTime="2025-12-22 01:37:46.511800633 +0000 UTC m=+15.297860268" watchObservedRunningTime="2025-12-22 01:37:46.51233861 +0000 UTC m=+15.298398229"
	Dec 22 01:37:50 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:50.500803     783 scope.go:117] "RemoveContainer" containerID="967f593b8b8062f920020cb551dfae37c8a3d07d41a047bb0d20920d706b7d4f"
	Dec 22 01:37:51 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:51.505552     783 scope.go:117] "RemoveContainer" containerID="967f593b8b8062f920020cb551dfae37c8a3d07d41a047bb0d20920d706b7d4f"
	Dec 22 01:37:51 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:51.506781     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:37:51 default-k8s-diff-port-733235 kubelet[783]: E1222 01:37:51.514619     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:37:52 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:52.509697     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:37:52 default-k8s-diff-port-733235 kubelet[783]: E1222 01:37:52.509863     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:37:55 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:55.974264     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:37:55 default-k8s-diff-port-733235 kubelet[783]: E1222 01:37:55.974505     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:06.374017     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:06.542234     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:06.542520     783 scope.go:117] "RemoveContainer" containerID="c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: E1222 01:38:06.544003     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:38:08 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:08.565819     783 scope.go:117] "RemoveContainer" containerID="ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31"
	Dec 22 01:38:15 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:15.973989     783 scope.go:117] "RemoveContainer" containerID="c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	Dec 22 01:38:15 default-k8s-diff-port-733235 kubelet[783]: E1222 01:38:15.974185     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:38:25 default-k8s-diff-port-733235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:38:26 default-k8s-diff-port-733235 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:38:26 default-k8s-diff-port-733235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [234dc7879d1d2f4481efdf1307fa02e5e13fdc950270816a19c3e22111ef36f0] <==
	2025/12/22 01:37:45 Using namespace: kubernetes-dashboard
	2025/12/22 01:37:45 Using in-cluster config to connect to apiserver
	2025/12/22 01:37:45 Using secret token for csrf signing
	2025/12/22 01:37:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/22 01:37:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/22 01:37:45 Successful initial request to the apiserver, version: v1.34.3
	2025/12/22 01:37:45 Generating JWE encryption key
	2025/12/22 01:37:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/22 01:37:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/22 01:37:46 Initializing JWE encryption key from synchronized object
	2025/12/22 01:37:46 Creating in-cluster Sidecar client
	2025/12/22 01:37:46 Serving insecurely on HTTP port: 9090
	2025/12/22 01:37:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:38:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:37:45 Starting overwatch
	
	
	==> storage-provisioner [ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31] <==
	I1222 01:37:38.169877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1222 01:38:08.259895       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01] <==
	I1222 01:38:08.627401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:38:08.649718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:38:08.649985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1222 01:38:08.652464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:12.108154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:16.368895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:19.967205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:23.020688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:26.043019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:26.051462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:38:26.051613       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:38:26.051784       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733235_dece4b67-4a11-4d42-bfca-d38c6a57a5a1!
	I1222 01:38:26.052761       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11324e73-4990-4f66-8f18-92ad4f625ab0", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-733235_dece4b67-4a11-4d42-bfca-d38c6a57a5a1 became leader
	W1222 01:38:26.060598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:26.074333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:38:26.152813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733235_dece4b67-4a11-4d42-bfca-d38c6a57a5a1!
	W1222 01:38:28.076895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:28.083249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
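
The storage-provisioner above takes its leader lock on a v1 Endpoints object, which is what produces the repeated deprecation warnings. Below is a minimal client-go sketch of the same election using the Lease-based lock the warning points to; the lock name and namespace are taken from the log, everything else is illustrative rather than the provisioner's actual code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner also runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// A coordination.k8s.io/v1 Lease lock avoids the v1 Endpoints
	// deprecation warnings seen above.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting provisioner") },
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}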
	

-- /stdout --
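
One pattern worth calling out in the kubelet section of the dump above: the dashboard-metrics-scraper restart delay grows from "back-off 10s" to "back-off 20s". That is the kubelet's CrashLoopBackOff behavior, which doubles the delay after each failed restart up to a five-minute cap. A hand-rolled sketch of that documented schedule (not kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // documented CrashLoopBackOff cap
	delay := 10 * time.Second        // initial back-off, as seen in the log
	for crash := 1; crash <= 8; crash++ {
		fmt.Printf("crash %d -> back-off %s\n", crash, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
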
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235: exit status 2 (362.990823ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
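
The harness's last check above, "get po -A --field-selector=status.phase!=Running", asks the apiserver for every pod that is not in the Running phase. A hedged client-go equivalent of that query (the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the harness uses its own KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// An empty namespace argument means all namespaces, like kubectl's -A.
	pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}
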
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-733235
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-733235:

-- stdout --
	[
	    {
	        "Id": "7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269",
	        "Created": "2025-12-22T01:36:14.986448621Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2130274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:37:24.80441533Z",
	            "FinishedAt": "2025-12-22T01:37:23.933145342Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/hosts",
	        "LogPath": "/var/lib/docker/containers/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269-json.log",
	        "Name": "/default-k8s-diff-port-733235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-733235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-733235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269",
	                "LowerDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be590ba0b0a6654ae5d8d5466d97281fd24a7707873963e3e676b1ec6b7bccab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-733235",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-733235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-733235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-733235",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-733235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c1c603c7b3a47f992315171ef58e621f205cfd83798149c1c2bae2d062f24f4",
	            "SandboxKey": "/var/run/docker/netns/4c1c603c7b3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38771"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38772"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38775"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38773"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38774"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-733235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:b2:35:a7:fe:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "587a11d4872e57b94e293749bb81cafed3a21b4741fd9542ba2b75e1a005d466",
	                    "EndpointID": "1a3cf427d9ac22323ac30b190f2274ebacaa2b76037a30227e3eae7853bda982",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-733235",
	                        "7c78b7294c07"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
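
Everything the harness reads out of that inspect dump is also reachable programmatically. A short sketch using the Docker Go SDK to print the container state and the 127.0.0.1 port bindings shown above (container name taken from the log):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "default-k8s-diff-port-733235")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", info.State.Status, "paused:", info.State.Paused)
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}
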
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235: exit status 2 (329.618941ms)

-- stdout --
	Running

                                                
helpers_test.go:248: status error: exit status 2 (may be ok)
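
The --format flag passed to minikube status is a Go text/template rendered against minikube's status struct, which is how the command can print "Running" for the requested field while still exiting 2: the exit code reflects all components, not just the one printed. A minimal sketch with an assumed struct shape (field names inferred only from the flags used above):

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Status is an assumed shape for illustration; minikube's real struct has
// more fields.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println()
	// The template prints only the requested field; a non-zero exit code
	// would be derived from the full struct, so "Running" and exit status 2
	// can coexist.
}
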
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-733235 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-733235 logs -n 25: (1.271744785s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-031738       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ delete  │ -p cert-expiration-031738                                                                                                                                                                                                                     │ cert-expiration-031738       │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │ 22 Dec 25 01:32 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                               │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                     │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                   │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                         │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                         │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                               │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:37:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:37:24.520921 2130151 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:37:24.521104 2130151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:37:24.521136 2130151 out.go:374] Setting ErrFile to fd 2...
	I1222 01:37:24.521155 2130151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:37:24.521474 2130151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:37:24.521883 2130151 out.go:368] Setting JSON to false
	I1222 01:37:24.522804 2130151 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116394,"bootTime":1766251051,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:37:24.522895 2130151 start.go:143] virtualization:  
	I1222 01:37:24.526109 2130151 out.go:179] * [default-k8s-diff-port-733235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:37:24.530025 2130151 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:37:24.530096 2130151 notify.go:221] Checking for updates...
	I1222 01:37:24.534202 2130151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:37:24.536991 2130151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:37:24.539877 2130151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:37:24.542759 2130151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:37:24.545567 2130151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:37:24.548746 2130151 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:37:24.549489 2130151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:37:24.579714 2130151 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:37:24.579829 2130151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:37:24.648678 2130151 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:37:24.63935489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:37:24.648784 2130151 docker.go:319] overlay module found
	I1222 01:37:24.651964 2130151 out.go:179] * Using the docker driver based on existing profile
	I1222 01:37:24.654812 2130151 start.go:309] selected driver: docker
	I1222 01:37:24.654835 2130151 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:37:24.654935 2130151 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:37:24.655664 2130151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:37:24.721952 2130151 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:37:24.71238698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:37:24.722284 2130151 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:37:24.722316 2130151 cni.go:84] Creating CNI manager for ""
	I1222 01:37:24.722365 2130151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:37:24.722410 2130151 start.go:353] cluster config:
	{Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:37:24.725666 2130151 out.go:179] * Starting "default-k8s-diff-port-733235" primary control-plane node in "default-k8s-diff-port-733235" cluster
	I1222 01:37:24.728575 2130151 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:37:24.731444 2130151 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:37:24.734222 2130151 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:37:24.734269 2130151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:37:24.734281 2130151 cache.go:65] Caching tarball of preloaded images
	I1222 01:37:24.734280 2130151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:37:24.734362 2130151 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:37:24.734372 2130151 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:37:24.734481 2130151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json ...
	I1222 01:37:24.753888 2130151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:37:24.753912 2130151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:37:24.753926 2130151 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:37:24.753971 2130151 start.go:360] acquireMachinesLock for default-k8s-diff-port-733235: {Name:mk42f93f802a61825395428c6e75b6089866c8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:37:24.754033 2130151 start.go:364] duration metric: took 37.931µs to acquireMachinesLock for "default-k8s-diff-port-733235"
	I1222 01:37:24.754058 2130151 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:37:24.754068 2130151 fix.go:54] fixHost starting: 
	I1222 01:37:24.754330 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:24.771097 2130151 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733235: state=Stopped err=<nil>
	W1222 01:37:24.771128 2130151 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:37:24.445159 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000649464s
	I1222 01:37:24.445188 2113859 kubeadm.go:319] 
	I1222 01:37:24.445245 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:37:24.445278 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:37:24.445510 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:37:24.445521 2113859 kubeadm.go:319] 
	I1222 01:37:24.445625 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:37:24.445657 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:37:24.445688 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:37:24.445692 2113859 kubeadm.go:319] 
	I1222 01:37:24.449085 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:37:24.449561 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:37:24.449681 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:37:24.449965 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:37:24.449971 2113859 kubeadm.go:319] 
	I1222 01:37:24.450044 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:37:24.450217 2113859 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491054] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000649464s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
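
The failure quoted above is kubeadm's wait-control-plane phase giving up on the kubelet health endpoint after four minutes. A minimal sketch of the same probe against the URL quoted in the error; "connection refused" here corresponds to a kubelet that never came up:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		log.Fatal(err) // e.g. dial tcp 127.0.0.1:10248: connect: connection refused
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
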
	
	I1222 01:37:24.450303 2113859 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:37:24.890509 2113859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:37:24.915976 2113859 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:37:24.916036 2113859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:37:24.926777 2113859 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:37:24.926795 2113859 kubeadm.go:158] found existing configuration files:
	
	I1222 01:37:24.926846 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:37:24.935332 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:37:24.935391 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:37:24.943361 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:37:24.952220 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:37:24.952286 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:37:24.960609 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:37:24.969890 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:37:24.969948 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:37:24.978647 2113859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:37:24.987603 2113859 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:37:24.987662 2113859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
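The eight Run lines above are one pattern applied four times: grep each kubeconfig for the control-plane endpoint, and rm -f it when the grep fails (here, because none of the files exist yet after the reset). A compressed sketch of that loop follows; the runner callback is hypothetical, standing in for minikube's ssh_runner:

    package main

    import "fmt"

    // cleanupStaleConfigs mirrors the grep/rm pairs above: any kubeconfig
    // that does not mention the expected endpoint (or is missing) is removed.
    func cleanupStaleConfigs(run func(cmd string) error, endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
    			run(fmt.Sprintf("sudo rm -f %s", f))
    		}
    	}
    }

    func main() {
    	echo := func(cmd string) error { fmt.Println(cmd); return nil }
    	cleanupStaleConfigs(echo, "https://control-plane.minikube.internal:8443")
    }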
	I1222 01:37:24.999818 2113859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:37:25.076514 2113859 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:37:25.087372 2113859 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:37:25.206650 2113859 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:37:25.206722 2113859 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:37:25.206758 2113859 kubeadm.go:319] OS: Linux
	I1222 01:37:25.206804 2113859 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:37:25.206853 2113859 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:37:25.206900 2113859 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:37:25.206949 2113859 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:37:25.206997 2113859 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:37:25.207051 2113859 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:37:25.207097 2113859 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:37:25.207145 2113859 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:37:25.207191 2113859 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:37:25.290369 2113859 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:37:25.290485 2113859 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:37:25.290577 2113859 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:37:25.313723 2113859 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:37:25.319101 2113859 out.go:252]   - Generating certificates and keys ...
	I1222 01:37:25.319194 2113859 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:37:25.319264 2113859 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:37:25.319343 2113859 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:37:25.319409 2113859 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:37:25.319483 2113859 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:37:25.319543 2113859 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:37:25.319610 2113859 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:37:25.319675 2113859 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:37:25.319755 2113859 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:37:25.319830 2113859 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:37:25.320128 2113859 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:37:25.320259 2113859 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:37:25.579598 2113859 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:37:26.290496 2113859 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:37:26.459743 2113859 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:37:26.687849 2113859 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:37:27.003327 2113859 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:37:27.003462 2113859 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:37:27.005755 2113859 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:37:27.009338 2113859 out.go:252]   - Booting up control plane ...
	I1222 01:37:27.009453 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:37:27.009975 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:37:27.011908 2113859 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:37:27.027550 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:37:27.027957 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:37:27.035510 2113859 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:37:27.035816 2113859 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:37:27.035861 2113859 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:37:27.169794 2113859 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:37:27.169917 2113859 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:37:24.774213 2130151 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-733235" ...
	I1222 01:37:24.774300 2130151 cli_runner.go:164] Run: docker start default-k8s-diff-port-733235
	I1222 01:37:25.035402 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:25.059750 2130151 kic.go:430] container "default-k8s-diff-port-733235" state is running.
	I1222 01:37:25.060323 2130151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:37:25.087238 2130151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/config.json ...
	I1222 01:37:25.087466 2130151 machine.go:94] provisionDockerMachine start ...
	I1222 01:37:25.087534 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:25.106881 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:25.107215 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:25.107244 2130151 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:37:25.107874 2130151 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49652->127.0.0.1:38771: read: connection reset by peer
	I1222 01:37:28.240802 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733235
	
	I1222 01:37:28.240828 2130151 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-733235"
	I1222 01:37:28.240900 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.258108 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:28.258418 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:28.258435 2130151 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733235 && echo "default-k8s-diff-port-733235" | sudo tee /etc/hostname
	I1222 01:37:28.401917 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733235
	
	I1222 01:37:28.402068 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.419290 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:28.419607 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:28.419625 2130151 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:37:28.549530 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: 
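The quoted shell above pins the machine hostname in /etc/hosts idempotently: leave the file alone if the name is already present, rewrite the 127.0.1.1 line if one exists, append otherwise. The same decision tree in pure Go, operating on the file contents as a string (a sketch, not minikube's code; the function name is illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // pinHostname returns updated /etc/hosts contents, applying the same
    // three-way logic as the shell snippet above.
    func pinHostname(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	for _, l := range lines {
    		if strings.HasSuffix(l, " "+name) {
    			return hosts // hostname already present: no change
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name // rewrite the loopback alias line
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "\n127.0.1.1 " + name // no alias line: append one
    }

    func main() {
    	fmt.Println(pinHostname("127.0.0.1 localhost", "default-k8s-diff-port-733235"))
    }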
	I1222 01:37:28.549556 2130151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:37:28.549584 2130151 ubuntu.go:190] setting up certificates
	I1222 01:37:28.549605 2130151 provision.go:84] configureAuth start
	I1222 01:37:28.549682 2130151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:37:28.569707 2130151 provision.go:143] copyHostCerts
	I1222 01:37:28.569777 2130151 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:37:28.569800 2130151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:37:28.569880 2130151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:37:28.569981 2130151 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:37:28.569991 2130151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:37:28.570017 2130151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:37:28.570071 2130151 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:37:28.570080 2130151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:37:28.570104 2130151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:37:28.570192 2130151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733235 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-733235 localhost minikube]
	I1222 01:37:28.689451 2130151 provision.go:177] copyRemoteCerts
	I1222 01:37:28.689577 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:37:28.689650 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.706730 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:28.800993 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:37:28.817800 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1222 01:37:28.834969 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:37:28.851928 2130151 provision.go:87] duration metric: took 302.295966ms to configureAuth
	I1222 01:37:28.851997 2130151 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:37:28.852215 2130151 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:37:28.852351 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:28.869373 2130151 main.go:144] libmachine: Using SSH client type: native
	I1222 01:37:28.869687 2130151 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38771 <nil> <nil>}
	I1222 01:37:28.869710 2130151 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:37:29.211321 2130151 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:37:29.211343 2130151 machine.go:97] duration metric: took 4.123860423s to provisionDockerMachine
	I1222 01:37:29.211354 2130151 start.go:293] postStartSetup for "default-k8s-diff-port-733235" (driver="docker")
	I1222 01:37:29.211364 2130151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:37:29.211422 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:37:29.211471 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.229034 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.325252 2130151 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:37:29.328582 2130151 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:37:29.328614 2130151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:37:29.328625 2130151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:37:29.328677 2130151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:37:29.328765 2130151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:37:29.328866 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:37:29.336175 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:37:29.355719 2130151 start.go:296] duration metric: took 144.351069ms for postStartSetup
	I1222 01:37:29.355841 2130151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:37:29.355929 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.375688 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.470411 2130151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:37:29.475240 2130151 fix.go:56] duration metric: took 4.721166126s for fixHost
	I1222 01:37:29.475265 2130151 start.go:83] releasing machines lock for "default-k8s-diff-port-733235", held for 4.721217693s
	I1222 01:37:29.475351 2130151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-733235
	I1222 01:37:29.492521 2130151 ssh_runner.go:195] Run: cat /version.json
	I1222 01:37:29.492571 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.492609 2130151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:37:29.492668 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:29.513441 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.514261 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:29.705168 2130151 ssh_runner.go:195] Run: systemctl --version
	I1222 01:37:29.711330 2130151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:37:29.746777 2130151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:37:29.751022 2130151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:37:29.751094 2130151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:37:29.758432 2130151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:37:29.758456 2130151 start.go:496] detecting cgroup driver to use...
	I1222 01:37:29.758512 2130151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:37:29.758564 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:37:29.773449 2130151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:37:29.786043 2130151 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:37:29.786131 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:37:29.801449 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:37:29.814427 2130151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:37:29.931330 2130151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:37:30.052296 2130151 docker.go:234] disabling docker service ...
	I1222 01:37:30.052373 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:37:30.069405 2130151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:37:30.082905 2130151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:37:30.221538 2130151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:37:30.337482 2130151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:37:30.350440 2130151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:37:30.364189 2130151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:37:30.364272 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.373292 2130151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:37:30.373397 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.382333 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.391178 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.399958 2130151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:37:30.408024 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.416948 2130151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.425658 2130151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:37:30.434526 2130151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:37:30.442378 2130151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:37:30.450115 2130151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:37:30.560535 2130151 ssh_runner.go:195] Run: sudo systemctl restart crio
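The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then reloads systemd and restarts CRI-O. A sketch that assembles the same command sequence; the helper name and return shape are illustrative, the command strings are the ones logged:

    package main

    import "fmt"

    // crioConfigCommands reproduces the edit sequence above for a given
    // pause image and cgroup driver; each string is meant to be run over SSH.
    func crioConfigCommands(pauseImage, cgroupDriver string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    }

    func main() {
    	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10.1", "cgroupfs") {
    		fmt.Println(c)
    	}
    }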
	I1222 01:37:30.743873 2130151 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:37:30.743985 2130151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:37:30.747907 2130151 start.go:564] Will wait 60s for crictl version
	I1222 01:37:30.747986 2130151 ssh_runner.go:195] Run: which crictl
	I1222 01:37:30.751442 2130151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:37:30.775633 2130151 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:37:30.775715 2130151 ssh_runner.go:195] Run: crio --version
	I1222 01:37:30.807550 2130151 ssh_runner.go:195] Run: crio --version
	I1222 01:37:30.843817 2130151 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:37:30.846644 2130151 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-733235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:37:30.867154 2130151 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:37:30.871473 2130151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:37:30.881795 2130151 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:37:30.881929 2130151 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:37:30.881990 2130151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:37:30.920370 2130151 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:37:30.920398 2130151 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:37:30.920461 2130151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:37:30.949936 2130151 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:37:30.949961 2130151 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:37:30.949970 2130151 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.3 crio true true} ...
	I1222 01:37:30.950078 2130151 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-733235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
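The unit text above (kubeadm.go:947) is the systemd drop-in minikube writes for the kubelet: it clears ExecStart and relaunches the versioned kubelet binary with per-node flags. A text/template sketch that renders an equivalent drop-in; the template shape is inferred from the log dump, and the struct fields are hypothetical:

    package main

    import (
    	"os"
    	"strings"
    	"text/template"
    )

    // unitText reconstructs the drop-in body logged above; only the values
    // substituted in main come from this test run.
    var unitText = strings.Join([]string{
    	"[Unit]",
    	"Wants=crio.service",
    	"",
    	"[Service]",
    	"ExecStart=",
    	"ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}",
    	"",
    	"[Install]",
    	"",
    }, "\n")

    var kubeletDropIn = template.Must(template.New("kubelet").Parse(unitText))

    func main() {
    	kubeletDropIn.Execute(os.Stdout, struct{ Version, Node, IP string }{
    		"v1.34.3", "default-k8s-diff-port-733235", "192.168.85.2",
    	})
    }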
	I1222 01:37:30.950163 2130151 ssh_runner.go:195] Run: crio config
	I1222 01:37:31.015583 2130151 cni.go:84] Creating CNI manager for ""
	I1222 01:37:31.015609 2130151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:37:31.015631 2130151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:37:31.015655 2130151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733235 NodeName:default-k8s-diff-port-733235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:37:31.015788 2130151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
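The generated kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch of walking such a stream; the choice of gopkg.in/yaml.v3 is an assumption, any multi-document YAML decoder works the same way:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    // printKinds decodes every document in a multi-document YAML stream and
    // reports its apiVersion and kind, as in the kubeadm.yaml shown above.
    func printKinds(r io.Reader) error {
    	dec := yaml.NewDecoder(r)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
    			return nil
    		} else if err != nil {
    			return err
    		}
    		fmt.Println(doc.APIVersion, doc.Kind)
    	}
    }

    func main() {
    	sample := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
    	if err := printKinds(strings.NewReader(sample)); err != nil {
    		fmt.Println(err)
    	}
    }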
	
	I1222 01:37:31.015875 2130151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:37:31.023758 2130151 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:37:31.023849 2130151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:37:31.031328 2130151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1222 01:37:31.043495 2130151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:37:31.056679 2130151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1222 01:37:31.069545 2130151 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:37:31.073244 2130151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:37:31.083166 2130151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:37:31.197352 2130151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:37:31.219237 2130151 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235 for IP: 192.168.85.2
	I1222 01:37:31.219270 2130151 certs.go:195] generating shared ca certs ...
	I1222 01:37:31.219288 2130151 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:31.219445 2130151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:37:31.219501 2130151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:37:31.219512 2130151 certs.go:257] generating profile certs ...
	I1222 01:37:31.219603 2130151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.key
	I1222 01:37:31.219687 2130151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key.be3f76d2
	I1222 01:37:31.219741 2130151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key
	I1222 01:37:31.219864 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:37:31.219923 2130151 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:37:31.219937 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:37:31.219976 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:37:31.220004 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:37:31.220043 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:37:31.220092 2130151 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:37:31.220764 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:37:31.241429 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:37:31.260406 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:37:31.283362 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:37:31.301843 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1222 01:37:31.317919 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:37:31.342374 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:37:31.361800 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:37:31.382133 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:37:31.406168 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:37:31.423287 2130151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:37:31.442648 2130151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:37:31.455594 2130151 ssh_runner.go:195] Run: openssl version
	I1222 01:37:31.461852 2130151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.469628 2130151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:37:31.478091 2130151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.483007 2130151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.483114 2130151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:37:31.525771 2130151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:37:31.532937 2130151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.539804 2130151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:37:31.546917 2130151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.550673 2130151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.550762 2130151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:37:31.591164 2130151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:37:31.600272 2130151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.608384 2130151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:37:31.616885 2130151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.621174 2130151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.621266 2130151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:37:31.663758 2130151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
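Each CA install above follows the same three steps the log verifies with test -L: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and symlink <hash>.0 to it under /etc/ssl/certs (51391683.0 and friends are those hashes). A sketch of the hash-and-link steps via os/exec; the function name is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCACert computes the openssl subject hash of a PEM file and
    // refreshes the /etc/ssl/certs/<hash>.0 symlink, mirroring the
    // ln -fs / test -L flow above.
    func linkCACert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // -f semantics: drop any existing link first
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }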
	I1222 01:37:31.671328 2130151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:37:31.675031 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:37:31.716968 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:37:31.758744 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:37:31.801176 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:37:31.852261 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:37:31.911893 2130151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
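The six openssl ... -checkend 86400 runs above each ask one question: does this certificate expire within the next 24 hours. An equivalent check in pure Go with crypto/x509 (a sketch; minikube itself shells out to openssl as logged):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file will
    // expire inside the window, like `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }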
	I1222 01:37:31.969935 2130151 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-733235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-733235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:37:31.970073 2130151 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:37:31.970168 2130151 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:37:32.047279 2130151 cri.go:96] found id: "905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5"
	I1222 01:37:32.047348 2130151 cri.go:96] found id: "be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232"
	I1222 01:37:32.047375 2130151 cri.go:96] found id: "d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4"
	I1222 01:37:32.047391 2130151 cri.go:96] found id: "0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8"
	I1222 01:37:32.047421 2130151 cri.go:96] found id: ""
	I1222 01:37:32.047487 2130151 ssh_runner.go:195] Run: sudo runc list -f json
	W1222 01:37:32.065017 2130151 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:37:32Z" level=error msg="open /run/runc: no such file or directory"
	I1222 01:37:32.065173 2130151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:37:32.080963 2130151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:37:32.081021 2130151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:37:32.081100 2130151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:37:32.092384 2130151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:37:32.092875 2130151 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-733235" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:37:32.093067 2130151 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-733235" cluster setting kubeconfig missing "default-k8s-diff-port-733235" context setting]
	I1222 01:37:32.093471 2130151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:32.094898 2130151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:37:32.107280 2130151 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:37:32.107356 2130151 kubeadm.go:602] duration metric: took 26.302008ms to restartPrimaryControlPlane
	I1222 01:37:32.107378 2130151 kubeadm.go:403] duration metric: took 137.455548ms to StartCluster
	I1222 01:37:32.107424 2130151 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:37:32.107514 2130151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:37:32.108179 2130151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
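kubeconfig.go above notices the profile is missing from the Jenkins kubeconfig and repairs it under a write lock. A sketch of that repair with k8s.io/client-go/tools/clientcmd; the helper and its arguments are illustrative and not minikube's actual function:

    package kubecfg

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // AddClusterContext inserts the cluster and context entries that the
    // verify step above reported missing, then writes the file back.
    func AddClusterContext(path, name, server, caFile string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cfg.Clusters[name] = &clientcmdapi.Cluster{
    		Server:               server, // e.g. https://192.168.85.2:8444
    		CertificateAuthority: caFile,
    	}
    	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	return clientcmd.WriteToFile(*cfg, path)
    }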
	I1222 01:37:32.108443 2130151 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:37:32.108809 2130151 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:37:32.108879 2130151 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-733235"
	I1222 01:37:32.108893 2130151 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-733235"
	W1222 01:37:32.108899 2130151 addons.go:248] addon storage-provisioner should already be in state true
	I1222 01:37:32.108919 2130151 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:37:32.109395 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.109660 2130151 config.go:182] Loaded profile config "default-k8s-diff-port-733235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:37:32.109772 2130151 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-733235"
	I1222 01:37:32.109817 2130151 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-733235"
	W1222 01:37:32.109836 2130151 addons.go:248] addon dashboard should already be in state true
	I1222 01:37:32.109883 2130151 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:37:32.110387 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.110758 2130151 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-733235"
	I1222 01:37:32.110789 2130151 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733235"
	I1222 01:37:32.111088 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.114255 2130151 out.go:179] * Verifying Kubernetes components...
	I1222 01:37:32.121176 2130151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:37:32.149078 2130151 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:37:32.152768 2130151 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:37:32.152791 2130151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:37:32.152858 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:32.187171 2130151 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:37:32.191461 2130151 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:37:32.192282 2130151 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-733235"
	W1222 01:37:32.192298 2130151 addons.go:248] addon default-storageclass should already be in state true
	I1222 01:37:32.192321 2130151 host.go:66] Checking if "default-k8s-diff-port-733235" exists ...
	I1222 01:37:32.192733 2130151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-733235 --format={{.State.Status}}
	I1222 01:37:32.197538 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:37:32.197570 2130151 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:37:32.197647 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:32.199384 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:32.266068 2130151 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:37:32.266095 2130151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:37:32.266171 2130151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-733235
	I1222 01:37:32.270848 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:32.294620 2130151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38771 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/default-k8s-diff-port-733235/id_rsa Username:docker}
	I1222 01:37:32.476088 2130151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:37:32.532143 2130151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:37:32.592166 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:37:32.592196 2130151 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:37:32.636963 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:37:32.636989 2130151 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:37:32.645772 2130151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:37:32.727430 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:37:32.727458 2130151 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:37:32.784157 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:37:32.784181 2130151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:37:32.822456 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:37:32.822490 2130151 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:37:32.846989 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:37:32.847035 2130151 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:37:32.871944 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:37:32.871970 2130151 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:37:32.885736 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:37:32.885761 2130151 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:37:32.899998 2130151 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:37:32.900025 2130151 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:37:32.913154 2130151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:37:38.679134 2130151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.203011467s)
	I1222 01:37:38.679188 2130151 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.147022963s)
	I1222 01:37:38.679228 2130151 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733235" to be "Ready" ...
	I1222 01:37:38.679435 2130151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.033633478s)
	I1222 01:37:38.680004 2130151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.766814749s)
	I1222 01:37:38.683083 2130151 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-733235 addons enable metrics-server
	
	I1222 01:37:38.697862 2130151 node_ready.go:49] node "default-k8s-diff-port-733235" is "Ready"
	I1222 01:37:38.697946 2130151 node_ready.go:38] duration metric: took 18.683587ms for node "default-k8s-diff-port-733235" to be "Ready" ...
	I1222 01:37:38.697988 2130151 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:37:38.698091 2130151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:37:38.709530 2130151 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1222 01:37:38.712427 2130151 addons.go:530] duration metric: took 6.603611827s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1222 01:37:38.712505 2130151 api_server.go:72] duration metric: took 6.604005825s to wait for apiserver process to appear ...
	I1222 01:37:38.712527 2130151 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:37:38.712547 2130151 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1222 01:37:38.720716 2130151 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1222 01:37:38.721998 2130151 api_server.go:141] control plane version: v1.34.3
	I1222 01:37:38.722023 2130151 api_server.go:131] duration metric: took 9.48823ms to wait for apiserver health ...
	I1222 01:37:38.722033 2130151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:37:38.725674 2130151 system_pods.go:59] 8 kube-system pods found
	I1222 01:37:38.725754 2130151 system_pods.go:61] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:37:38.725768 2130151 system_pods.go:61] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:37:38.725774 2130151 system_pods.go:61] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:37:38.725780 2130151 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:37:38.725787 2130151 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:37:38.725791 2130151 system_pods.go:61] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:37:38.725797 2130151 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:37:38.725804 2130151 system_pods.go:61] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Running
	I1222 01:37:38.725812 2130151 system_pods.go:74] duration metric: took 3.774376ms to wait for pod list to return data ...
	I1222 01:37:38.725822 2130151 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:37:38.728351 2130151 default_sa.go:45] found service account: "default"
	I1222 01:37:38.728372 2130151 default_sa.go:55] duration metric: took 2.544399ms for default service account to be created ...
	I1222 01:37:38.728381 2130151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:37:38.731844 2130151 system_pods.go:86] 8 kube-system pods found
	I1222 01:37:38.731883 2130151 system_pods.go:89] "coredns-66bc5c9577-xtnvg" [79571637-a410-4e69-a9f6-6fb2d8a9fa55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:37:38.731894 2130151 system_pods.go:89] "etcd-default-k8s-diff-port-733235" [40b1a0b8-9cf2-4fe6-bb90-cb2caf675f7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:37:38.731900 2130151 system_pods.go:89] "kindnet-tnxxl" [0ae022fe-4121-4114-951f-be8d91fa4da1] Running
	I1222 01:37:38.731914 2130151 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733235" [84100065-87ab-4710-8f57-7633be0761e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:37:38.731922 2130151 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733235" [8833013b-d77f-423d-843f-9ff87dc4776a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:37:38.731926 2130151 system_pods.go:89] "kube-proxy-j6fr2" [88ac4eac-fa44-4275-b221-2f761c9d3dd3] Running
	I1222 01:37:38.731933 2130151 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733235" [5cc9f890-5a9c-448f-8784-9713c4273f11] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:37:38.731945 2130151 system_pods.go:89] "storage-provisioner" [12ef3141-54f6-41b1-abd5-2dc491507ff8] Running
	I1222 01:37:38.731952 2130151 system_pods.go:126] duration metric: took 3.565201ms to wait for k8s-apps to be running ...
	I1222 01:37:38.731958 2130151 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:37:38.732026 2130151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:37:38.749710 2130151 system_svc.go:56] duration metric: took 17.740887ms WaitForService to wait for kubelet
	I1222 01:37:38.749738 2130151 kubeadm.go:587] duration metric: took 6.641240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:37:38.749756 2130151 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:37:38.754006 2130151 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:37:38.754090 2130151 node_conditions.go:123] node cpu capacity is 2
	I1222 01:37:38.754136 2130151 node_conditions.go:105] duration metric: took 4.374383ms to run NodePressure ...
	I1222 01:37:38.754167 2130151 start.go:242] waiting for startup goroutines ...
	I1222 01:37:38.754188 2130151 start.go:247] waiting for cluster config update ...
	I1222 01:37:38.754229 2130151 start.go:256] writing updated cluster config ...
	I1222 01:37:38.754603 2130151 ssh_runner.go:195] Run: rm -f paused
	I1222 01:37:38.758814 2130151 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:37:38.763894 2130151 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xtnvg" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:37:40.769178 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:42.770291 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:45.278098 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:47.768831 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:49.769278 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:51.769517 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:54.270589 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:56.769554 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:37:59.269969 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:01.769206 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:04.269605 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:06.270377 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:08.272146 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	W1222 01:38:10.769256 2130151 pod_ready.go:104] pod "coredns-66bc5c9577-xtnvg" is not "Ready", error: <nil>
	I1222 01:38:12.270065 2130151 pod_ready.go:94] pod "coredns-66bc5c9577-xtnvg" is "Ready"
	I1222 01:38:12.270153 2130151 pod_ready.go:86] duration metric: took 33.506165229s for pod "coredns-66bc5c9577-xtnvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.272860 2130151 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.277098 2130151 pod_ready.go:94] pod "etcd-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:12.277123 2130151 pod_ready.go:86] duration metric: took 4.236278ms for pod "etcd-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.279047 2130151 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.283176 2130151 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:12.283202 2130151 pod_ready.go:86] duration metric: took 4.130443ms for pod "kube-apiserver-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.285312 2130151 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.468208 2130151 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:12.468246 2130151 pod_ready.go:86] duration metric: took 182.886118ms for pod "kube-controller-manager-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:12.668307 2130151 pod_ready.go:83] waiting for pod "kube-proxy-j6fr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.068285 2130151 pod_ready.go:94] pod "kube-proxy-j6fr2" is "Ready"
	I1222 01:38:13.068311 2130151 pod_ready.go:86] duration metric: took 399.964376ms for pod "kube-proxy-j6fr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.268189 2130151 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.668430 2130151 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-733235" is "Ready"
	I1222 01:38:13.668462 2130151 pod_ready.go:86] duration metric: took 400.230206ms for pod "kube-scheduler-default-k8s-diff-port-733235" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:38:13.668476 2130151 pod_ready.go:40] duration metric: took 34.909582309s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:38:13.721024 2130151 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:38:13.724282 2130151 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-733235" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.567609184Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb23d6c8-d2e8-4892-b766-0fbc18e4d15e name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.568735033Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fa3d9dcf-4135-43e3-a653-f1c34d26389a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.56882993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.573847945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.574136945Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f28514f509f3b4d6cc4b080a7ad4cc43e1aaedd8e6a2110d9049e7124f68f5b2/merged/etc/passwd: no such file or directory"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.57424415Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f28514f509f3b4d6cc4b080a7ad4cc43e1aaedd8e6a2110d9049e7124f68f5b2/merged/etc/group: no such file or directory"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.574558233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.592944402Z" level=info msg="Created container f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01: kube-system/storage-provisioner/storage-provisioner" id=fa3d9dcf-4135-43e3-a653-f1c34d26389a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.604786678Z" level=info msg="Starting container: f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01" id=77d96f31-fc65-4766-93bb-6c813f9ad7dd name=/runtime.v1.RuntimeService/StartContainer
	Dec 22 01:38:08 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:08.606993651Z" level=info msg="Started container" PID=1651 containerID=f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01 description=kube-system/storage-provisioner/storage-provisioner id=77d96f31-fc65-4766-93bb-6c813f9ad7dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=1eb235cea5e350ab6316e42e41f663cc1da6fc592e64f2c65599e068e4815a97
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.443648685Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.450817872Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.450848624Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.45086779Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.453962991Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.454095976Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.454189175Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.45738011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.457409902Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.457429175Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.460247626Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.460276934Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.460293064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.463405914Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 22 01:38:18 default-k8s-diff-port-733235 crio[653]: time="2025-12-22T01:38:18.46344355Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f6d6f6396fe13       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   1eb235cea5e35       storage-provisioner                                    kube-system
	c0036ba277be0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   ba56e39d5b6b1       dashboard-metrics-scraper-6ffb444bf9-87c6b             kubernetes-dashboard
	234dc7879d1d2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   4b12a0d16b24f       kubernetes-dashboard-855c9754f9-2sjcl                  kubernetes-dashboard
	e61219fe02447       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   c66db85941c83       busybox                                                default
	893aacc74e612       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   0cc8b08569b32       coredns-66bc5c9577-xtnvg                               kube-system
	1ee0cbd88ee07       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           52 seconds ago      Running             kindnet-cni                 1                   225855acf586d       kindnet-tnxxl                                          kube-system
	69c3f005ed94b       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                           52 seconds ago      Running             kube-proxy                  1                   ff40123cab014       kube-proxy-j6fr2                                       kube-system
	ed7b1ae749750       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   1eb235cea5e35       storage-provisioner                                    kube-system
	905af8f5658bb       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                           58 seconds ago      Running             kube-controller-manager     1                   7cf8d8bbfa6d0       kube-controller-manager-default-k8s-diff-port-733235   kube-system
	be7fb25ee1383       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                           58 seconds ago      Running             kube-apiserver              1                   8b788fafb4247       kube-apiserver-default-k8s-diff-port-733235            kube-system
	d67aa42f388cd       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                           58 seconds ago      Running             kube-scheduler              1                   9204146104e66       kube-scheduler-default-k8s-diff-port-733235            kube-system
	0c4e1557aaf4b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           58 seconds ago      Running             etcd                        1                   95ebeec67c2dd       etcd-default-k8s-diff-port-733235                      kube-system
	
	
	==> coredns [893aacc74e612f41dc6362e747257f3761995b16d65649fdccd3803289f042d2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52498 - 62345 "HINFO IN 365848151036176430.3163089285270450224. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014286674s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-733235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-733235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=default-k8s-diff-port-733235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_22T01_36_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Dec 2025 01:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-733235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Dec 2025 01:38:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Dec 2025 01:38:07 +0000   Mon, 22 Dec 2025 01:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-733235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 27e26b6886fb33f4c9687edc6946608f
	  System UUID:                6fde222b-2b74-47ec-8d2f-d10c695b4327
	  Boot ID:                    5d991a02-c1ba-4924-be4b-321625b455e6
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-xtnvg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-default-k8s-diff-port-733235                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-tnxxl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-733235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-733235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-j6fr2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-733235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-87c6b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2sjcl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 105s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  118s (x8 over 119s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s (x8 over 119s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s (x8 over 119s)  kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 111s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           107s                 node-controller  Node default-k8s-diff-port-733235 event: Registered Node default-k8s-diff-port-733235 in Controller
	  Normal   NodeReady                93s                  kubelet          Node default-k8s-diff-port-733235 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-733235 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node default-k8s-diff-port-733235 event: Registered Node default-k8s-diff-port-733235 in Controller
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c4e1557aaf4ba6f8dd79484c5a6fdd993697b4d3a48944d92da7868590e39e8] <==
	{"level":"warn","ts":"2025-12-22T01:37:35.849576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.869838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.882181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.898393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.915001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.945646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.957550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:35.985120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.017038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.033219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.049631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.065220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.081525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.109559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.125563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.187906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.198494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.213878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.228832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.243377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.277869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.291539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.304830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-22T01:37:36.353965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:38:30 up 1 day,  8:20,  0 user,  load average: 1.54, 2.37, 2.34
	Linux default-k8s-diff-port-733235 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ee0cbd88ee07483ade1428ffe942035401854e6ac8f7f373724b892d15f0bf1] <==
	I1222 01:37:38.162553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1222 01:37:38.224139       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1222 01:37:38.224288       1 main.go:148] setting mtu 1500 for CNI 
	I1222 01:37:38.224301       1 main.go:178] kindnetd IP family: "ipv4"
	I1222 01:37:38.224319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-22T01:37:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1222 01:37:38.442334       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1222 01:37:38.442408       1 controller.go:381] "Waiting for informer caches to sync"
	I1222 01:37:38.442441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1222 01:37:38.443289       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1222 01:38:08.442486       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1222 01:38:08.443628       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1222 01:38:08.443695       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1222 01:38:08.443728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1222 01:38:09.843554       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1222 01:38:09.843620       1 metrics.go:72] Registering metrics
	I1222 01:38:09.843680       1 controller.go:711] "Syncing nftables rules"
	I1222 01:38:18.442564       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:38:18.442700       1 main.go:301] handling current node
	I1222 01:38:28.449406       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1222 01:38:28.449438       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be7fb25ee13836e5779902fd877f2186e3cf9a31a21929cf6abc242f36e24232] <==
	I1222 01:37:37.162795       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1222 01:37:37.181271       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1222 01:37:37.183441       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1222 01:37:37.184989       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1222 01:37:37.187542       1 cache.go:39] Caches are synced for autoregister controller
	I1222 01:37:37.187822       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1222 01:37:37.189618       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1222 01:37:37.189652       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1222 01:37:37.189761       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1222 01:37:37.217904       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1222 01:37:37.217938       1 policy_source.go:240] refreshing policies
	E1222 01:37:37.247754       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1222 01:37:37.254878       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1222 01:37:37.264622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1222 01:37:37.465893       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1222 01:37:37.894990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1222 01:37:38.188938       1 controller.go:667] quota admission added evaluator for: namespaces
	I1222 01:37:38.257295       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1222 01:37:38.308155       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1222 01:37:38.360809       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1222 01:37:38.585544       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.255.80"}
	I1222 01:37:38.607602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.114.43"}
	I1222 01:37:40.430611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1222 01:37:40.774915       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1222 01:37:40.824995       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [905af8f5658bbafff6fbdf5bb6452d014ca2090ce3f5e40e90cbd56785d1aad5] <==
	I1222 01:37:40.377625       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1222 01:37:40.383221       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1222 01:37:40.385446       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1222 01:37:40.397736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1222 01:37:40.398960       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1222 01:37:40.404285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:37:40.405275       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1222 01:37:40.405350       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1222 01:37:40.407731       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1222 01:37:40.411967       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1222 01:37:40.416426       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1222 01:37:40.417395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1222 01:37:40.417452       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1222 01:37:40.417442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1222 01:37:40.417504       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1222 01:37:40.417512       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1222 01:37:40.417631       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1222 01:37:40.417393       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1222 01:37:40.439166       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1222 01:37:40.439336       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1222 01:37:40.439397       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1222 01:37:40.439437       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1222 01:37:40.439465       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1222 01:37:40.441213       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1222 01:37:40.442098       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [69c3f005ed94b1ccf31b7a27d3116e7ae80725f8604454ff4ed94158879868d6] <==
	I1222 01:37:38.421199       1 server_linux.go:53] "Using iptables proxy"
	I1222 01:37:38.625594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1222 01:37:38.726478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1222 01:37:38.726510       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1222 01:37:38.726591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1222 01:37:38.750311       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1222 01:37:38.750359       1 server_linux.go:132] "Using iptables Proxier"
	I1222 01:37:38.754006       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1222 01:37:38.754280       1 server.go:527] "Version info" version="v1.34.3"
	I1222 01:37:38.754296       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:37:38.755347       1 config.go:200] "Starting service config controller"
	I1222 01:37:38.755413       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1222 01:37:38.764300       1 config.go:106] "Starting endpoint slice config controller"
	I1222 01:37:38.769628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1222 01:37:38.766560       1 config.go:403] "Starting serviceCIDR config controller"
	I1222 01:37:38.769744       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1222 01:37:38.765231       1 config.go:309] "Starting node config controller"
	I1222 01:37:38.769810       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1222 01:37:38.769838       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1222 01:37:38.855697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1222 01:37:38.869838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1222 01:37:38.869840       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d67aa42f388cdf6681fda3c9278147bee06c272bb914a742bf8d4661a93d37f4] <==
	I1222 01:37:35.012600       1 serving.go:386] Generated self-signed cert in-memory
	W1222 01:37:37.081676       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1222 01:37:37.081711       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1222 01:37:37.081721       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1222 01:37:37.081729       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1222 01:37:37.175611       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1222 01:37:37.175648       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1222 01:37:37.183539       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1222 01:37:37.184190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:37:37.184228       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1222 01:37:37.184306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1222 01:37:37.284374       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076166     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4x48\" (UniqueName: \"kubernetes.io/projected/b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5-kube-api-access-l4x48\") pod \"dashboard-metrics-scraper-6ffb444bf9-87c6b\" (UID: \"b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076239     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-87c6b\" (UID: \"b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076266     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/585268bd-d9fd-4358-8e2e-6805b573af31-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2sjcl\" (UID: \"585268bd-d9fd-4358-8e2e-6805b573af31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2sjcl"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:41.076283     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585sx\" (UniqueName: \"kubernetes.io/projected/585268bd-d9fd-4358-8e2e-6805b573af31-kube-api-access-585sx\") pod \"kubernetes-dashboard-855c9754f9-2sjcl\" (UID: \"585268bd-d9fd-4358-8e2e-6805b573af31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2sjcl"
	Dec 22 01:37:41 default-k8s-diff-port-733235 kubelet[783]: W1222 01:37:41.298178     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c78b7294c07a35fe54b510143fb97e3ac6c8bdc562f0d0c1749ff2f4b26f269/crio-4b12a0d16b24fcee9b4e92eadd34ce7041d4f73ef2779fade7c4ced54d366ffe WatchSource:0}: Error finding container 4b12a0d16b24fcee9b4e92eadd34ce7041d4f73ef2779fade7c4ced54d366ffe: Status 404 returned error can't find the container with id 4b12a0d16b24fcee9b4e92eadd34ce7041d4f73ef2779fade7c4ced54d366ffe
	Dec 22 01:37:42 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:42.173030     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 22 01:37:46 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:46.512360     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2sjcl" podStartSLOduration=1.9223210339999999 podStartE2EDuration="6.51233861s" podCreationTimestamp="2025-12-22 01:37:40 +0000 UTC" firstStartedPulling="2025-12-22 01:37:41.302007599 +0000 UTC m=+10.088067226" lastFinishedPulling="2025-12-22 01:37:45.892025183 +0000 UTC m=+14.678084802" observedRunningTime="2025-12-22 01:37:46.511800633 +0000 UTC m=+15.297860268" watchObservedRunningTime="2025-12-22 01:37:46.51233861 +0000 UTC m=+15.298398229"
	Dec 22 01:37:50 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:50.500803     783 scope.go:117] "RemoveContainer" containerID="967f593b8b8062f920020cb551dfae37c8a3d07d41a047bb0d20920d706b7d4f"
	Dec 22 01:37:51 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:51.505552     783 scope.go:117] "RemoveContainer" containerID="967f593b8b8062f920020cb551dfae37c8a3d07d41a047bb0d20920d706b7d4f"
	Dec 22 01:37:51 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:51.506781     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:37:51 default-k8s-diff-port-733235 kubelet[783]: E1222 01:37:51.514619     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:37:52 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:52.509697     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:37:52 default-k8s-diff-port-733235 kubelet[783]: E1222 01:37:52.509863     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:37:55 default-k8s-diff-port-733235 kubelet[783]: I1222 01:37:55.974264     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:37:55 default-k8s-diff-port-733235 kubelet[783]: E1222 01:37:55.974505     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:06.374017     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:06.542234     783 scope.go:117] "RemoveContainer" containerID="615efcedfa484cb8b796d5534e3633b4370c52fdaa17fa64abfcb0ede9048f83"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:06.542520     783 scope.go:117] "RemoveContainer" containerID="c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	Dec 22 01:38:06 default-k8s-diff-port-733235 kubelet[783]: E1222 01:38:06.544003     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:38:08 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:08.565819     783 scope.go:117] "RemoveContainer" containerID="ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31"
	Dec 22 01:38:15 default-k8s-diff-port-733235 kubelet[783]: I1222 01:38:15.973989     783 scope.go:117] "RemoveContainer" containerID="c0036ba277be0d901db9cdb64b5a6e5e9b908f9c3bdb99057b6d375471fb4d12"
	Dec 22 01:38:15 default-k8s-diff-port-733235 kubelet[783]: E1222 01:38:15.974185     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-87c6b_kubernetes-dashboard(b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-87c6b" podUID="b1b87daf-d8cb-4acd-8833-bc1b5c2e6ea5"
	Dec 22 01:38:25 default-k8s-diff-port-733235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 22 01:38:26 default-k8s-diff-port-733235 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 22 01:38:26 default-k8s-diff-port-733235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [234dc7879d1d2f4481efdf1307fa02e5e13fdc950270816a19c3e22111ef36f0] <==
	2025/12/22 01:37:45 Using namespace: kubernetes-dashboard
	2025/12/22 01:37:45 Using in-cluster config to connect to apiserver
	2025/12/22 01:37:45 Using secret token for csrf signing
	2025/12/22 01:37:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/22 01:37:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/22 01:37:45 Successful initial request to the apiserver, version: v1.34.3
	2025/12/22 01:37:45 Generating JWE encryption key
	2025/12/22 01:37:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/22 01:37:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/22 01:37:46 Initializing JWE encryption key from synchronized object
	2025/12/22 01:37:46 Creating in-cluster Sidecar client
	2025/12/22 01:37:46 Serving insecurely on HTTP port: 9090
	2025/12/22 01:37:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:38:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/22 01:37:45 Starting overwatch
	
	
	==> storage-provisioner [ed7b1ae74975081543638e2810948f943f99e282f8acef543825b48f8f02bb31] <==
	I1222 01:37:38.169877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1222 01:38:08.259895       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f6d6f6396fe13981b1de896d8f1e0b240191b41b71e09228f8510898bb14bc01] <==
	I1222 01:38:08.627401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1222 01:38:08.649718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1222 01:38:08.649985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1222 01:38:08.652464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:12.108154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:16.368895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:19.967205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:23.020688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:26.043019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:26.051462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:38:26.051613       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1222 01:38:26.051784       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733235_dece4b67-4a11-4d42-bfca-d38c6a57a5a1!
	I1222 01:38:26.052761       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11324e73-4990-4f66-8f18-92ad4f625ab0", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-733235_dece4b67-4a11-4d42-bfca-d38c6a57a5a1 became leader
	W1222 01:38:26.060598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:26.074333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1222 01:38:26.152813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733235_dece4b67-4a11-4d42-bfca-d38c6a57a5a1!
	W1222 01:38:28.076895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:28.083249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:30.087034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1222 01:38:30.092197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
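
The kubelet log above shows dashboard-metrics-scraper stuck in CrashLoopBackOff (back-off doubling from 10s to 20s), and the first storage-provisioner instance exits fatally at 01:38:08 when the in-cluster apiserver VIP 10.96.0.1:443 times out. A minimal inspection sketch, not part of the captured run; the context and pod name are taken from the log above and the flags are stock kubectl:

	# hypothetical follow-up, not executed by this test run:
	kubectl --context default-k8s-diff-port-733235 -n kubernetes-dashboard \
		logs dashboard-metrics-scraper-6ffb444bf9-87c6b --previous
	kubectl --context default-k8s-diff-port-733235 -n kubernetes-dashboard \
		describe pod dashboard-metrics-scraper-6ffb444bf9-87c6b
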
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235: exit status 2 (341.560357ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.28s)
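
The status check above prints only the APIServer field through a Go template, so the "Running" on stdout and the non-zero exit can disagree: the exit code reflects the overall cluster state, not just the printed field. A sketch of a fuller query against the same profile, assuming the cluster is still up; --output json is a stock minikube status flag that this run did not use:

	# hypothetical, shows all component states at once:
	out/minikube-linux-arm64 status -p default-k8s-diff-port-733235 --output json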

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (501.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1222 01:39:28.820276 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:39:47.126916 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:40:31.015402 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:40:47.971226 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:41:25.759340 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m19.524730987s)
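
The cert_rotation errors above are emitted by the test runner process (pid 1881219), not by this start invocation (pid 2134286); they reference client.crt files of profiles (addons-568394, old-k8s-version-535197, functional-837733) that no longer exist on disk, which suggests stale kubeconfig entries left behind by earlier tests rather than a fault in newest-cni-704217. A hypothetical cleanup sketch, not executed here; both are stock kubectl config subcommands and the context name is only an example:

	# hypothetical cleanup of stale contexts, not part of the captured run:
	kubectl config get-contexts
	kubectl config delete-context addons-568394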

                                                
                                                
-- stdout --
	* [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1222 01:38:34.723707 2134286 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:34.723823 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.723833 2134286 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:34.723839 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.724123 2134286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:38:34.724545 2134286 out.go:368] Setting JSON to false
	I1222 01:38:34.725446 2134286 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116464,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:38:34.725511 2134286 start.go:143] virtualization:  
	I1222 01:38:34.729663 2134286 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:34.733886 2134286 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:34.734036 2134286 notify.go:221] Checking for updates...
	I1222 01:38:34.740441 2134286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:34.743585 2134286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:38:34.746751 2134286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:38:34.749869 2134286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:34.752958 2134286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:34.756493 2134286 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:34.756622 2134286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:34.783091 2134286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:34.783353 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.839782 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.830338261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.839888 2134286 docker.go:319] overlay module found
	I1222 01:38:34.845569 2134286 out.go:179] * Using the docker driver based on user configuration
	I1222 01:38:34.848592 2134286 start.go:309] selected driver: docker
	I1222 01:38:34.848611 2134286 start.go:928] validating driver "docker" against <nil>
	I1222 01:38:34.848634 2134286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:34.849297 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.935540 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.926549754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.935703 2134286 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:38:34.935733 2134286 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:38:34.935959 2134286 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:34.938976 2134286 out.go:179] * Using Docker driver with root privileges
	I1222 01:38:34.941882 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:34.941953 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:34.941972 2134286 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:38:34.942050 2134286 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:34.945127 2134286 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:38:34.947951 2134286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:38:34.950705 2134286 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:34.953516 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:34.953561 2134286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:38:34.953574 2134286 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:34.953598 2134286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:34.953659 2134286 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:34.953669 2134286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:38:34.953790 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:34.953808 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json: {Name:mkbd82291dca772e741ca6a4965901d8562f02f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:34.972608 2134286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:34.972632 2134286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:34.972653 2134286 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:34.972687 2134286 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:34.972789 2134286 start.go:364] duration metric: took 81.072µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:38:34.972826 2134286 start.go:93] Provisioning new machine with config: &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:38:34.972897 2134286 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:38:34.976294 2134286 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:38:34.976517 2134286 start.go:159] libmachine.API.Create for "newest-cni-704217" (driver="docker")
	I1222 01:38:34.976560 2134286 client.go:173] LocalClient.Create starting
	I1222 01:38:34.976631 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:38:34.976666 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976687 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.976752 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:38:34.976774 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976789 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.977153 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:38:34.991933 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:38:34.992025 2134286 network_create.go:284] running [docker network inspect newest-cni-704217] to gather additional debugging logs...
	I1222 01:38:34.992051 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217
	W1222 01:38:35.009843 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 returned with exit code 1
	I1222 01:38:35.009879 2134286 network_create.go:287] error running [docker network inspect newest-cni-704217]: docker network inspect newest-cni-704217: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-704217 not found
	I1222 01:38:35.009894 2134286 network_create.go:289] output of [docker network inspect newest-cni-704217]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-704217 not found
	
	** /stderr **
	I1222 01:38:35.009998 2134286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:35.028311 2134286 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:38:35.028684 2134286 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:38:35.029053 2134286 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:38:35.029412 2134286 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:38:35.029864 2134286 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5ab0}
	I1222 01:38:35.029890 2134286 network_create.go:124] attempt to create docker network newest-cni-704217 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:38:35.029949 2134286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-704217 newest-cni-704217
	I1222 01:38:35.091875 2134286 network_create.go:108] docker network newest-cni-704217 192.168.85.0/24 created
	I1222 01:38:35.091908 2134286 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-704217" container
	I1222 01:38:35.091998 2134286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:38:35.109188 2134286 cli_runner.go:164] Run: docker volume create newest-cni-704217 --label name.minikube.sigs.k8s.io=newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:38:35.127123 2134286 oci.go:103] Successfully created a docker volume newest-cni-704217
	I1222 01:38:35.127220 2134286 cli_runner.go:164] Run: docker run --rm --name newest-cni-704217-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --entrypoint /usr/bin/test -v newest-cni-704217:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:38:35.650646 2134286 oci.go:107] Successfully prepared a docker volume newest-cni-704217
	I1222 01:38:35.650718 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:35.650728 2134286 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:38:35.650802 2134286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:38:39.466788 2134286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.81593333s)
	I1222 01:38:39.466823 2134286 kic.go:203] duration metric: took 3.816091086s to extract preloaded images to volume ...
	W1222 01:38:39.466970 2134286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:38:39.467078 2134286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:38:39.518928 2134286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-704217 --name newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-704217 --network newest-cni-704217 --ip 192.168.85.2 --volume newest-cni-704217:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:38:39.824489 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Running}}
	I1222 01:38:39.845273 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:39.868132 2134286 cli_runner.go:164] Run: docker exec newest-cni-704217 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:38:39.918857 2134286 oci.go:144] the created container "newest-cni-704217" has a running status.
	I1222 01:38:39.918885 2134286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa...
	I1222 01:38:40.057379 2134286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:38:40.081663 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.113212 2134286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:38:40.113247 2134286 kic_runner.go:114] Args: [docker exec --privileged newest-cni-704217 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:38:40.193617 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.220327 2134286 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:40.220433 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:40.250962 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:40.251306 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:40.251316 2134286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:40.251882 2134286 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51396->127.0.0.1:38776: read: connection reset by peer
	I1222 01:38:43.385195 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.385223 2134286 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:38:43.385289 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.406922 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.407255 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.407271 2134286 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:38:43.550918 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.550996 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.569002 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.569357 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.569382 2134286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:43.701601 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:43.701676 2134286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:38:43.701715 2134286 ubuntu.go:190] setting up certificates
	I1222 01:38:43.701726 2134286 provision.go:84] configureAuth start
	I1222 01:38:43.701795 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:43.718903 2134286 provision.go:143] copyHostCerts
	I1222 01:38:43.718967 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:38:43.718976 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:38:43.719054 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:38:43.719156 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:38:43.719162 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:38:43.719188 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:38:43.719246 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:38:43.719251 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:38:43.719274 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:38:43.719328 2134286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:38:43.977508 2134286 provision.go:177] copyRemoteCerts
	I1222 01:38:43.977583 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:43.977627 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.995727 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.093656 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:44.115417 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:44.140528 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:38:44.162225 2134286 provision.go:87] duration metric: took 460.475764ms to configureAuth
	I1222 01:38:44.162253 2134286 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:44.162450 2134286 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:44.162564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.180792 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:44.181103 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:44.181121 2134286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:38:44.469915 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:38:44.469949 2134286 machine.go:97] duration metric: took 4.249594356s to provisionDockerMachine
	I1222 01:38:44.469959 2134286 client.go:176] duration metric: took 9.493388716s to LocalClient.Create
	I1222 01:38:44.469974 2134286 start.go:167] duration metric: took 9.493457957s to libmachine.API.Create "newest-cni-704217"
	I1222 01:38:44.469987 2134286 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:38:44.469998 2134286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:44.470072 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:44.470110 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.487566 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.585312 2134286 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:44.588561 2134286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:44.588588 2134286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:44.588600 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:38:44.588655 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:38:44.588735 2134286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:38:44.588834 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:44.596094 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:44.613273 2134286 start.go:296] duration metric: took 143.27184ms for postStartSetup
	I1222 01:38:44.613679 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.630261 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:44.630541 2134286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:44.630589 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.647052 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.738630 2134286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:44.743420 2134286 start.go:128] duration metric: took 9.770509222s to createHost
	I1222 01:38:44.743497 2134286 start.go:83] releasing machines lock for "newest-cni-704217", held for 9.770692552s
	I1222 01:38:44.743588 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.760151 2134286 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:44.760209 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.760507 2134286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:44.760564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.782041 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.788656 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.973622 2134286 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:44.980619 2134286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:38:45.057102 2134286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:45.064439 2134286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:45.064557 2134286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:45.103917 2134286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:38:45.103956 2134286 start.go:496] detecting cgroup driver to use...
	I1222 01:38:45.103997 2134286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:45.104069 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:38:45.126644 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:38:45.158107 2134286 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:45.158598 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:45.189553 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:45.237377 2134286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:45.372783 2134286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:45.495736 2134286 docker.go:234] disabling docker service ...
	I1222 01:38:45.495825 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:45.520536 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:45.534342 2134286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:45.667915 2134286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:45.795197 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:45.808465 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:45.822173 2134286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:38:45.822242 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.830599 2134286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:38:45.830673 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.839239 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.847549 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.856111 2134286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:45.863959 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.872612 2134286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.886086 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.895578 2134286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:45.903269 2134286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:45.910730 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.026286 2134286 ssh_runner.go:195] Run: sudo systemctl restart crio
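The block above configures CRI-O entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts the service. A minimal sketch collapsing the two simple substitutions into one invocation, using the same expressions as logged (the delete-then-append edits for conmon_cgroup and default_sysctls are order-sensitive and left as shown above):

    # Same pause_image and cgroup_manager substitutions as the logged commands
    sudo sed -i \
      -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
      -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio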
	I1222 01:38:46.193052 2134286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:38:46.193121 2134286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:38:46.197063 2134286 start.go:564] Will wait 60s for crictl version
	I1222 01:38:46.197156 2134286 ssh_runner.go:195] Run: which crictl
	I1222 01:38:46.200956 2134286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:46.225282 2134286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:38:46.225459 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.257224 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.288579 2134286 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:38:46.291328 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:46.307792 2134286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:46.311726 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.324397 2134286 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:38:46.327309 2134286 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:46.327454 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:46.327539 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.376288 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.376313 2134286 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:38:46.376388 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.406533 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.406556 2134286 cache_images.go:86] Images are preloaded, skipping loading
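Both preload checks above shell out to crictl and parse its JSON image list. To reproduce the check by hand, something like the following works (jq is an assumption here; it is not part of the logged run):

    # List the repo tags of every image the runtime already has
    sudo crictl images --output json | jq -r '.images[].repoTags[]?'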
	I1222 01:38:46.406564 2134286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:38:46.406653 2134286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:46.406737 2134286 ssh_runner.go:195] Run: crio config
	I1222 01:38:46.477901 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:46.477922 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:46.477963 2134286 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:46.477999 2134286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:46.478139 2134286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
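The dump above is the complete config that the later kubeadm init consumes from /var/tmp/minikube/kubeadm.yaml: stacked InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents. Assuming the kubeadm build at v1.35.0-rc.1 ships the config validate subcommand, the file could be sanity-checked before init:

    # Validate the generated multi-document kubeadm config (subcommand assumed present)
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml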
	I1222 01:38:46.478220 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:46.486049 2134286 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:46.486137 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:46.493605 2134286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:38:46.506655 2134286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:46.520439 2134286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:38:46.533218 2134286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:46.537007 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
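Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same replace-then-copy idiom: strip any stale tab-separated entry, append the fresh one into a temp file, and sudo-copy it back into place. A generalized sketch of the pattern (the helper name and parameters are mine):

    update_host() {  # usage: update_host 192.168.85.2 control-plane.minikube.internal
      # Drop any existing line ending in <tab><name>, then append the new mapping
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }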
	I1222 01:38:46.546792 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.662666 2134286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:46.679594 2134286 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:38:46.679659 2134286 certs.go:195] generating shared ca certs ...
	I1222 01:38:46.679688 2134286 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.679859 2134286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:38:46.679932 2134286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:38:46.679968 2134286 certs.go:257] generating profile certs ...
	I1222 01:38:46.680044 2134286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:38:46.680085 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt with IP's: []
	I1222 01:38:46.841660 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt ...
	I1222 01:38:46.841690 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt: {Name:mk0a220848ae73520e360330c9832985737c86ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.841917 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key ...
	I1222 01:38:46.841933 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key: {Name:mk3fb1e07728181520997d7299a7d80c9afcd3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.842027 2134286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:38:46.842046 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:38:46.952611 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 ...
	I1222 01:38:46.952645 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856: {Name:mk23d75e59a433395bcccffb2237600ddcfba975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952842 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 ...
	I1222 01:38:46.952855 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856: {Name:mke91b318f6e1a5e82c06575fddda672305e0a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952949 2134286 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt
	I1222 01:38:46.953039 2134286 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key
	I1222 01:38:46.953108 2134286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:38:46.953127 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt with IP's: []
	I1222 01:38:47.135005 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt ...
	I1222 01:38:47.135039 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt: {Name:mk0bf95a48d358b9b6753ae55278429fc5cfa2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135258 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key ...
	I1222 01:38:47.135281 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key: {Name:mk2786f86b96c0685cea4c15b093d231aba8608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135484 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:38:47.135534 2134286 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:47.135550 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:38:47.135580 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:47.135609 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:47.135636 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:47.135686 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:47.137405 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:47.165131 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:47.186245 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:47.206617 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:47.225290 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:47.243359 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:38:47.261575 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:47.279279 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:47.296796 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:38:47.314798 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:38:47.332337 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:47.350465 2134286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:47.363436 2134286 ssh_runner.go:195] Run: openssl version
	I1222 01:38:47.370127 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.377515 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:47.384963 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388630 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388716 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.430642 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:47.438907 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:38:47.446439 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.453815 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:38:47.461441 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465092 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465189 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.506277 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:47.513878 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:38:47.521311 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.528445 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:38:47.535982 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539602 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539691 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.580405 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:47.587812 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
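The openssl x509 -hash / ln -fs pairs above maintain OpenSSL's hashed lookup directory: each trusted cert in /etc/ssl/certs needs a <subject-hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 here) so that verification can find it by subject hash. The idiom for a single cert:

    # Compute the subject hash and create the lookup symlink OpenSSL expects
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"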
	I1222 01:38:47.602092 2134286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:47.606351 2134286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:38:47.606451 2134286 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:47.606588 2134286 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:47.606681 2134286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:47.642649 2134286 cri.go:96] found id: ""
	I1222 01:38:47.642790 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:47.652615 2134286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:38:47.661497 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:38:47.661607 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:38:47.669765 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:38:47.669786 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:38:47.669862 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:38:47.678062 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:38:47.678185 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:38:47.685567 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:38:47.693826 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:38:47.693987 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:38:47.702486 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.710049 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:38:47.710137 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.717426 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:38:47.724768 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:38:47.724863 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:38:47.731996 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:38:47.851611 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:38:47.852034 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:38:47.938478 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:42:51.689259 2134286 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:42:51.689305 2134286 kubeadm.go:319] 
	I1222 01:42:51.689457 2134286 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:42:51.693607 2134286 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:42:51.693680 2134286 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:42:51.693786 2134286 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:42:51.693854 2134286 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:42:51.693896 2134286 kubeadm.go:319] OS: Linux
	I1222 01:42:51.693951 2134286 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:42:51.694009 2134286 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:42:51.694066 2134286 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:42:51.694122 2134286 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:42:51.694174 2134286 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:42:51.694237 2134286 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:42:51.694288 2134286 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:42:51.694342 2134286 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:42:51.694392 2134286 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:42:51.694474 2134286 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:42:51.694572 2134286 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:42:51.694670 2134286 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:42:51.694735 2134286 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:42:51.697624 2134286 out.go:252]   - Generating certificates and keys ...
	I1222 01:42:51.697718 2134286 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:42:51.697808 2134286 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:42:51.697915 2134286 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:42:51.697987 2134286 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:42:51.698058 2134286 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:42:51.698120 2134286 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:42:51.698181 2134286 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:42:51.698337 2134286 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:42:51.698415 2134286 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:42:51.698545 2134286 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:42:51.698615 2134286 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:42:51.698683 2134286 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:42:51.698736 2134286 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:42:51.698795 2134286 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:42:51.698852 2134286 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:42:51.698914 2134286 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:42:51.698974 2134286 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:42:51.699044 2134286 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:42:51.699102 2134286 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:42:51.699191 2134286 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:42:51.699266 2134286 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:42:51.702198 2134286 out.go:252]   - Booting up control plane ...
	I1222 01:42:51.702305 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:42:51.702393 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:42:51.702465 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:42:51.702605 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:42:51.702722 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:42:51.702859 2134286 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:42:51.702962 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:42:51.703031 2134286 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:42:51.703176 2134286 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:42:51.703289 2134286 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:42:51.703359 2134286 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001091621s
	I1222 01:42:51.703367 2134286 kubeadm.go:319] 
	I1222 01:42:51.703424 2134286 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:42:51.703471 2134286 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:42:51.703600 2134286 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:42:51.703609 2134286 kubeadm.go:319] 
	I1222 01:42:51.703741 2134286 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:42:51.703788 2134286 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:42:51.703827 2134286 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:42:51.703884 2134286 kubeadm.go:319] 
	W1222 01:42:51.703963 2134286 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001091621s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
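The cgroups v1 warning in the stderr above is a plausible cause of the kubelet healthz timeout: this is a 5.15 AWS kernel running the cgroupfs driver, and per the warning text, kubelet v1.35+ needs FailCgroupV1 set to false plus the validation skipped to run on such a host. The init command already passes SystemVerification in --ignore-preflight-errors, so only the kubelet option is missing. A hedged sketch of that config change, assuming the KubeletConfiguration YAML field is spelled failCgroupV1, applied to the file minikube already writes:

    # Assumption: failCgroupV1 is the YAML spelling of the 'FailCgroupV1' option
    # named in the warning; insert it inside the KubeletConfiguration document
    # (a blind append would land in the trailing KubeProxyConfiguration document).
    sudo sed -i '/^kind: KubeletConfiguration$/a failCgroupV1: false' \
      /var/tmp/minikube/kubeadm.yaml

Rerunning the logged kubeadm init command after that edit would exercise the same wait-control-plane phase that timed out here.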
	
	I1222 01:42:51.704042 2134286 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:42:52.116333 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:42:52.130279 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:42:52.130395 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:42:52.138956 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:42:52.138973 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:42:52.139033 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:42:52.147315 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:42:52.147440 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:42:52.155563 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:42:52.163564 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:42:52.163636 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:42:52.170913 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:42:52.178471 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:42:52.178533 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:42:52.185865 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:42:52.193518 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:42:52.193583 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:42:52.201481 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:42:52.237447 2134286 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:42:52.237777 2134286 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:42:52.304606 2134286 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:42:52.304761 2134286 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:42:52.304822 2134286 kubeadm.go:319] OS: Linux
	I1222 01:42:52.304898 2134286 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:42:52.305020 2134286 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:42:52.305102 2134286 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:42:52.305188 2134286 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:42:52.305283 2134286 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:42:52.305392 2134286 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:42:52.305467 2134286 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:42:52.305543 2134286 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:42:52.305618 2134286 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:42:52.385627 2134286 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:42:52.385746 2134286 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:42:52.385870 2134286 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:42:52.397770 2134286 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:42:52.403059 2134286 out.go:252]   - Generating certificates and keys ...
	I1222 01:42:52.403222 2134286 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:42:52.403321 2134286 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:42:52.403441 2134286 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:42:52.403535 2134286 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:42:52.403644 2134286 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:42:52.403731 2134286 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:42:52.403829 2134286 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:42:52.403931 2134286 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:42:52.404039 2134286 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:42:52.404145 2134286 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:42:52.404221 2134286 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:42:52.404305 2134286 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:42:52.526140 2134286 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:42:52.663215 2134286 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:42:53.123947 2134286 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:42:53.256891 2134286 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:42:53.565548 2134286 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:42:53.566081 2134286 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:42:53.568608 2134286 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:42:53.571917 2134286 out.go:252]   - Booting up control plane ...
	I1222 01:42:53.572079 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:42:53.572209 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:42:53.572309 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:42:53.586354 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:42:53.586471 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:42:53.594381 2134286 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:42:53.595983 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:42:53.596247 2134286 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:42:53.724884 2134286 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:42:53.725013 2134286 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:46:53.725590 2134286 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001122316s
	I1222 01:46:53.725621 2134286 kubeadm.go:319] 
	I1222 01:46:53.725676 2134286 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:46:53.725720 2134286 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:46:53.725879 2134286 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:46:53.725891 2134286 kubeadm.go:319] 
	I1222 01:46:53.725990 2134286 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:46:53.726023 2134286 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:46:53.726055 2134286 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:46:53.726063 2134286 kubeadm.go:319] 
	I1222 01:46:53.729947 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:46:53.730369 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:46:53.730481 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:46:53.730720 2134286 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:46:53.730733 2134286 kubeadm.go:319] 
	I1222 01:46:53.730802 2134286 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
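	Note: the kubeadm output above pins the failure on the kubelet itself. A minimal triage sketch, assuming only the profile name newest-cni-704217 from this run and shell access via `minikube ssh`:
	
	# Inspect the kubelet unit and its recent journal inside the node
	minikube -p newest-cni-704217 ssh "sudo systemctl status kubelet --no-pager"
	minikube -p newest-cni-704217 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Probe the same healthz endpoint kubeadm polled for 4m0s
	minikube -p newest-cni-704217 ssh "curl -sS http://127.0.0.1:10248/healthz"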
	I1222 01:46:53.730854 2134286 kubeadm.go:403] duration metric: took 8m6.124408914s to StartCluster
	I1222 01:46:53.730901 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:46:53.730968 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:46:53.759901 2134286 cri.go:96] found id: ""
	I1222 01:46:53.759933 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.759942 2134286 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:46:53.759948 2134286 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:46:53.760012 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:46:53.785167 2134286 cri.go:96] found id: ""
	I1222 01:46:53.785192 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.785201 2134286 logs.go:284] No container was found matching "etcd"
	I1222 01:46:53.785208 2134286 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:46:53.785267 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:46:53.810142 2134286 cri.go:96] found id: ""
	I1222 01:46:53.810168 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.810177 2134286 logs.go:284] No container was found matching "coredns"
	I1222 01:46:53.810184 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:46:53.810243 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:46:53.839873 2134286 cri.go:96] found id: ""
	I1222 01:46:53.839895 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.839903 2134286 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:46:53.839910 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:46:53.839968 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:46:53.874440 2134286 cri.go:96] found id: ""
	I1222 01:46:53.874526 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.874548 2134286 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:46:53.874593 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:46:53.874690 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:46:53.926286 2134286 cri.go:96] found id: ""
	I1222 01:46:53.926309 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.926318 2134286 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:46:53.926324 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:46:53.926384 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:46:53.961693 2134286 cri.go:96] found id: ""
	I1222 01:46:53.961717 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.961726 2134286 logs.go:284] No container was found matching "kindnet"
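	Note: the empty crictl results above mean no control-plane container was ever created, consistent with a kubelet that never came up. A sketch reproducing minikube's per-component probe as one loop (run inside the node):
	
	# Empty output per component matches the "0 containers" lines above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl --timeout=10s ps -a --quiet --name="$c"
	done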
	I1222 01:46:53.961735 2134286 logs.go:123] Gathering logs for kubelet ...
	I1222 01:46:53.961747 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:46:54.027665 2134286 logs.go:123] Gathering logs for dmesg ...
	I1222 01:46:54.027701 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:46:54.044975 2134286 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:46:54.045007 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:46:54.111999 2134286 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:46:54.103598    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.104026    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.105730    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.106387    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.107891    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
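	Note: `describe nodes` fails with "connection refused" because nothing serves the API on port 8443, which follows from the missing kube-apiserver container above; the kubeconfig itself is not at fault. A quick confirmation from inside the node (ss is assumed present in the kicbase image):
	
	# Expect no listener on 8443 in this state
	sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig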
	I1222 01:46:54.112026 2134286 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:46:54.112038 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:46:54.144880 2134286 logs.go:123] Gathering logs for container status ...
	I1222 01:46:54.144914 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:46:54.176206 2134286 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
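	Note: of the three warnings in the stderr above, the cgroups v1 deprecation is the one new to kubelet v1.35. A hedged sketch of the opt-in it describes, assuming the kubeadm-written config at /var/lib/kubelet/config.yaml and the camelCase KubeletConfiguration spelling failCgroupV1:
	
	# Add the opt-in only if the key is absent (a duplicate YAML key would be
	# ambiguous), then restart the kubelet
	grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml \
	  || echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet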
	W1222 01:46:54.176269 2134286 out.go:285] * 
	W1222 01:46:54.176328 2134286 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:46:54.176346 2134286 out.go:285] * 
	W1222 01:46:54.178676 2134286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
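	Note: the log bundle the box asks for can be produced directly against this profile:
	
	# Writes the full minikube log bundle to logs.txt for attachment to an issue
	out/minikube-linux-arm64 -p newest-cni-704217 logs --file=logs.txt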
	I1222 01:46:54.184460 2134286 out.go:203] 
	W1222 01:46:54.187291 2134286 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:46:54.187335 2134286 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:46:54.187358 2134286 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:46:54.190511 2134286 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 109
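Note: a retry sketch that applies the suggestion from the log to the exact start command this test ran; the kubelet cgroup-driver flag is the only addition:

	out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 \
	  --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1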
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-704217
helpers_test.go:244: (dbg) docker inspect newest-cni-704217:

-- stdout --
	[
	    {
	        "Id": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	        "Created": "2025-12-22T01:38:39.533406818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2134718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:38:39.609427135Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hosts",
	        "LogPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e-json.log",
	        "Name": "/newest-cni-704217",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-704217:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-704217",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	                "LowerDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-704217",
	                "Source": "/var/lib/docker/volumes/newest-cni-704217/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-704217",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-704217",
	                "name.minikube.sigs.k8s.io": "newest-cni-704217",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f1dc425bcc5666fe6d4a2f24c42949905dae75aa0fe97b26897c73e98810389",
	            "SandboxKey": "/var/run/docker/netns/8f1dc425bcc5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38776"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38777"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38780"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38778"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38779"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-704217": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:1f:25:28:ad:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37c5f2b876d48064d8ed7b8ba7f6e9a5833c911bae1041e13f11dbc7e56a4324",
	                    "EndpointID": "936fe31e2a62e192e6e8b7fb0777030ed5d6eddd56a85d9ab0417f54428bd2bc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-704217",
	                        "0d8468deabde"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
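Note: the inspect document above can be reduced to the fields this post-mortem actually uses with Go-template queries, e.g.:

	# Container state, the host port mapped to the API server, and the node IP
	docker inspect -f '{{.State.Status}}' newest-cni-704217
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-704217
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-704217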
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217: exit status 6 (318.885579ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:46:54.581712 2146884 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
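Note: exit status 6 here pairs with the kubeconfig endpoint error in the stderr above rather than with host state, hence "may be ok". The warning's own remedy:

	# Rewrites this profile's endpoint into the kubeconfig, then re-checks
	out/minikube-linux-arm64 -p newest-cni-704217 update-context
	out/minikube-linux-arm64 status -p newest-cni-704217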
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	│ stop    │ -p no-preload-491054 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ addons  │ enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:43:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:43:30.250445 2142162 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:43:30.250555 2142162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:43:30.250568 2142162 out.go:374] Setting ErrFile to fd 2...
	I1222 01:43:30.250578 2142162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:43:30.250816 2142162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:43:30.251169 2142162 out.go:368] Setting JSON to false
	I1222 01:43:30.252022 2142162 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116760,"bootTime":1766251051,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:43:30.252085 2142162 start.go:143] virtualization:  
	I1222 01:43:30.256993 2142162 out.go:179] * [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:43:30.260139 2142162 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:43:30.260239 2142162 notify.go:221] Checking for updates...
	I1222 01:43:30.265992 2142162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:43:30.268823 2142162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:30.271754 2142162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:43:30.274654 2142162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:43:30.277521 2142162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:43:30.280920 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:30.281572 2142162 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:43:30.304144 2142162 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:43:30.304273 2142162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:43:30.380279 2142162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:43:30.370426413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
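The docker info blob above is the JSON that the start path decodes before validating the driver. A minimal sketch of that probe, assuming only that the docker CLI is on PATH (this is an illustrative rewrite, not minikube source):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same invocation the log records: docker system info --format "{{json .}}"
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info map[string]any
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	// Pick out a couple of fields a driver-validation step would care about.
    	fmt.Println("NCPU:", info["NCPU"], "MemTotal:", info["MemTotal"])
    }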
	I1222 01:43:30.380380 2142162 docker.go:319] overlay module found
	I1222 01:43:30.383492 2142162 out.go:179] * Using the docker driver based on existing profile
	I1222 01:43:30.386252 2142162 start.go:309] selected driver: docker
	I1222 01:43:30.386270 2142162 start.go:928] validating driver "docker" against &{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:30.386379 2142162 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:43:30.387093 2142162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:43:30.443809 2142162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:43:30.434314894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:43:30.444139 2142162 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:43:30.444167 2142162 cni.go:84] Creating CNI manager for ""
	I1222 01:43:30.444230 2142162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:43:30.444275 2142162 start.go:353] cluster config:
	{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:30.449118 2142162 out.go:179] * Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	I1222 01:43:30.451889 2142162 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:43:30.454738 2142162 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:43:30.457645 2142162 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:43:30.457719 2142162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:43:30.457790 2142162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:43:30.458097 2142162 cache.go:107] acquiring lock: {Name:mkce8c58631c5f0d23bf68d20d333345f25cc3e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458179 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:43:30.458197 2142162 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.314µs
	I1222 01:43:30.458210 2142162 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:43:30.458227 2142162 cache.go:107] acquiring lock: {Name:mk4201be15d3fc5460d5a3841ee13ef2c95ea347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458267 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:43:30.458277 2142162 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 52.495µs
	I1222 01:43:30.458284 2142162 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458304 2142162 cache.go:107] acquiring lock: {Name:mkf2e3bf6867cb15e485fef5b9e233e7559c99a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458336 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:43:30.458345 2142162 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 43.535µs
	I1222 01:43:30.458353 2142162 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458369 2142162 cache.go:107] acquiring lock: {Name:mkb8446b3e0ab7883b46b478905fd798f55f01e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458400 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:43:30.458409 2142162 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 41.172µs
	I1222 01:43:30.458449 2142162 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458470 2142162 cache.go:107] acquiring lock: {Name:mk77482801a8c57f7731abc5d9887216f12ba8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458530 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:43:30.458546 2142162 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 77.315µs
	I1222 01:43:30.458553 2142162 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458608 2142162 cache.go:107] acquiring lock: {Name:mk692f79fde95c3755a4408e63ab002578541c74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458687 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:43:30.458702 2142162 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 113.81µs
	I1222 01:43:30.458726 2142162 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:43:30.458740 2142162 cache.go:107] acquiring lock: {Name:mkd233ea519aa3a2daed97ba862dcfb34d3be39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458851 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:43:30.458865 2142162 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 126.815µs
	I1222 01:43:30.458872 2142162 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:43:30.458914 2142162 cache.go:107] acquiring lock: {Name:mk309b91ac5e9b8dc5721727680b39f391c5db3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458983 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:43:30.458997 2142162 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 92.724µs
	I1222 01:43:30.459005 2142162 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:43:30.459012 2142162 cache.go:87] Successfully saved all images to host disk.
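All of the cache lines above take the exists/skip branch: each image maps to a tar file under the cache directory, and an existing file short-circuits the save. A minimal sketch of that check-then-skip pattern (the cachePath helper and the $HOME-based root are illustrative, not minikube's API):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cachePath mirrors the naming visible in the log:
    // "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
    func cachePath(root, image string) string {
    	return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
    	root := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
    	for _, img := range []string{
    		"registry.k8s.io/pause:3.10.1",
    		"registry.k8s.io/etcd:3.6.6-0",
    	} {
    		p := cachePath(root, img)
    		if _, err := os.Stat(p); err == nil {
    			fmt.Println(p, "exists, skipping save")
    			continue
    		}
    		fmt.Println(p, "missing, would save image to tar here")
    	}
    }

Because the check is a plain stat, repeated restarts (as in this SecondStart test) cost microseconds per image, which matches the sub-millisecond durations logged.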
	I1222 01:43:30.477002 2142162 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:43:30.477024 2142162 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:43:30.477043 2142162 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:43:30.477072 2142162 start.go:360] acquireMachinesLock for no-preload-491054: {Name:mk963c2c890d85a550b13090cd7467d59e3caf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.477129 2142162 start.go:364] duration metric: took 36.634µs to acquireMachinesLock for "no-preload-491054"
	I1222 01:43:30.477153 2142162 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:43:30.477163 2142162 fix.go:54] fixHost starting: 
	I1222 01:43:30.477492 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:30.493752 2142162 fix.go:112] recreateIfNeeded on no-preload-491054: state=Stopped err=<nil>
	W1222 01:43:30.493785 2142162 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:43:30.496979 2142162 out.go:252] * Restarting existing docker container for "no-preload-491054" ...
	I1222 01:43:30.497063 2142162 cli_runner.go:164] Run: docker start no-preload-491054
	I1222 01:43:30.758339 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:30.783924 2142162 kic.go:430] container "no-preload-491054" state is running.
	I1222 01:43:30.784895 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:30.806120 2142162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:43:30.806345 2142162 machine.go:94] provisionDockerMachine start ...
	I1222 01:43:30.806411 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:30.836679 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:30.837006 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:30.837015 2142162 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:43:30.839411 2142162 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:43:33.973258 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:43:33.973282 2142162 ubuntu.go:182] provisioning hostname "no-preload-491054"
	I1222 01:43:33.973370 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:33.991448 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:33.991768 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:33.991785 2142162 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-491054 && echo "no-preload-491054" | sudo tee /etc/hostname
	I1222 01:43:34.134668 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:43:34.134773 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.153303 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:34.153657 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:34.153681 2142162 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491054/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491054' | sudo tee -a /etc/hosts; 
				fi
			fi
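The SSH script above pins the machine hostname via the Debian 127.0.1.1 convention: do nothing if the name is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append. A hypothetical Go rendering of the same grep/sed logic, operating on an in-memory copy rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func pinHostname(hosts, name string) string {
    	// grep -xq '.*\s<name>': hostname already mapped somewhere?
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts
    	}
    	// sed 's/^127.0.1.1\s.*/.../': rewrite an existing 127.0.1.1 line.
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	// tee -a: no such line yet, append one.
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(pinHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "no-preload-491054"))
    }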
	I1222 01:43:34.285755 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:43:34.285786 2142162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:43:34.285816 2142162 ubuntu.go:190] setting up certificates
	I1222 01:43:34.285832 2142162 provision.go:84] configureAuth start
	I1222 01:43:34.285896 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:34.304161 2142162 provision.go:143] copyHostCerts
	I1222 01:43:34.304238 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:43:34.304271 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:43:34.304354 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:43:34.304468 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:43:34.304480 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:43:34.304508 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:43:34.304576 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:43:34.304585 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:43:34.304610 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:43:34.304663 2142162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.no-preload-491054 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054]
	I1222 01:43:34.369088 2142162 provision.go:177] copyRemoteCerts
	I1222 01:43:34.369157 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:43:34.369200 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.387117 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:34.489125 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:43:34.507509 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:43:34.526028 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:43:34.544740 2142162 provision.go:87] duration metric: took 258.882924ms to configureAuth
	I1222 01:43:34.544808 2142162 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:43:34.545018 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:34.545137 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.562445 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:34.562759 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:34.562773 2142162 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:43:34.840774 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:43:34.840838 2142162 machine.go:97] duration metric: took 4.034478045s to provisionDockerMachine
	I1222 01:43:34.840864 2142162 start.go:293] postStartSetup for "no-preload-491054" (driver="docker")
	I1222 01:43:34.840891 2142162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:43:34.841014 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:43:34.841131 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.858548 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:34.957415 2142162 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:43:34.960727 2142162 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:43:34.960754 2142162 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:43:34.960765 2142162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:43:34.960821 2142162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:43:34.960908 2142162 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:43:34.961020 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:43:34.968556 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:43:34.986959 2142162 start.go:296] duration metric: took 146.06498ms for postStartSetup
	I1222 01:43:34.987050 2142162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:43:34.987089 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.005930 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.110222 2142162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:43:35.118762 2142162 fix.go:56] duration metric: took 4.641592572s for fixHost
	I1222 01:43:35.118836 2142162 start.go:83] releasing machines lock for "no-preload-491054", held for 4.641692655s
	I1222 01:43:35.118944 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:35.139078 2142162 ssh_runner.go:195] Run: cat /version.json
	I1222 01:43:35.139134 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.139394 2142162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:43:35.139451 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.169405 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.178990 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.269046 2142162 ssh_runner.go:195] Run: systemctl --version
	I1222 01:43:35.364120 2142162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:43:35.400545 2142162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:43:35.405009 2142162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:43:35.405089 2142162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:43:35.413041 2142162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:43:35.413064 2142162 start.go:496] detecting cgroup driver to use...
	I1222 01:43:35.413113 2142162 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:43:35.413189 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:43:35.428420 2142162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:43:35.441740 2142162 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:43:35.441807 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:43:35.456961 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:43:35.470063 2142162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:43:35.582136 2142162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:43:35.708479 2142162 docker.go:234] disabling docker service ...
	I1222 01:43:35.708551 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:43:35.723755 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:43:35.736977 2142162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:43:35.868719 2142162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:43:35.995326 2142162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:43:36.015703 2142162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:43:36.031200 2142162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:43:36.031291 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.040788 2142162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:43:36.040868 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.050288 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.059682 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.069142 2142162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:43:36.078174 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.088084 2142162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.097092 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.106783 2142162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:43:36.114759 2142162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
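The run of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A hypothetical in-memory sketch of the two central rewrites (pause image and cgroup manager plus the conmon_cgroup insertion); the sample conf content is illustrative:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
    	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Force cgroupfs and re-add conmon_cgroup = "pod" right after it.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }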
	I1222 01:43:36.122164 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:36.245950 2142162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:43:36.431807 2142162 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:43:36.431892 2142162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:43:36.435793 2142162 start.go:564] Will wait 60s for crictl version
	I1222 01:43:36.435862 2142162 ssh_runner.go:195] Run: which crictl
	I1222 01:43:36.439341 2142162 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:43:36.467888 2142162 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:43:36.467972 2142162 ssh_runner.go:195] Run: crio --version
	I1222 01:43:36.498281 2142162 ssh_runner.go:195] Run: crio --version
	I1222 01:43:36.532998 2142162 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:43:36.535843 2142162 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:43:36.552071 2142162 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:43:36.555936 2142162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:43:36.565145 2142162 kubeadm.go:884] updating cluster {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:43:36.565259 2142162 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:43:36.565301 2142162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:43:36.609695 2142162 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:43:36.609715 2142162 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:43:36.609722 2142162 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:43:36.609826 2142162 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
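The kubelet drop-in above is assembled from a handful of node parameters. A hypothetical text/template sketch of how such a unit could be rendered (the data values are taken from the log; the template itself is illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet-dropin").Parse(unit))
    	err := t.Execute(os.Stdout, map[string]string{
    		"Kubelet": "/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet",
    		"Node":    "no-preload-491054",
    		"IP":      "192.168.76.2",
    	})
    	if err != nil {
    		panic(err)
    	}
    }

The empty ExecStart= line is the standard systemd idiom for clearing an inherited ExecStart before overriding it in a drop-in.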
	I1222 01:43:36.609901 2142162 ssh_runner.go:195] Run: crio config
	I1222 01:43:36.685397 2142162 cni.go:84] Creating CNI manager for ""
	I1222 01:43:36.685422 2142162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:43:36.685444 2142162 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:43:36.685471 2142162 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491054 NodeName:no-preload-491054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:43:36.685595 2142162 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
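The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by ---). A hypothetical decoding sketch, assuming the gopkg.in/yaml.v3 module is available; the two sample documents are abbreviated from the stream above:

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	docs := []string{
    		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n",
    		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfailSwapOn: false\n",
    	}
    	for _, d := range docs {
    		var m map[string]any
    		if err := yaml.Unmarshal([]byte(d), &m); err != nil {
    			panic(err)
    		}
    		// kind/apiVersion is how kubeadm routes each document to its schema.
    		fmt.Println("kind:", m["kind"], "apiVersion:", m["apiVersion"])
    	}
    }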
	
	I1222 01:43:36.685674 2142162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:43:36.693362 2142162 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:43:36.693431 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:43:36.700893 2142162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:43:36.713423 2142162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:43:36.726038 2142162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:43:36.738412 2142162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:43:36.741844 2142162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:43:36.751009 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:36.872344 2142162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:43:36.895746 2142162 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054 for IP: 192.168.76.2
	I1222 01:43:36.895766 2142162 certs.go:195] generating shared ca certs ...
	I1222 01:43:36.895796 2142162 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:36.895947 2142162 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:43:36.895998 2142162 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:43:36.896010 2142162 certs.go:257] generating profile certs ...
	I1222 01:43:36.896128 2142162 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key
	I1222 01:43:36.896216 2142162 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a
	I1222 01:43:36.896270 2142162 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key
	I1222 01:43:36.896403 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:43:36.896439 2142162 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:43:36.896451 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:43:36.896492 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:43:36.896520 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:43:36.896557 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:43:36.896612 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:43:36.901156 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:43:36.921532 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:43:36.938876 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:43:36.955891 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:43:36.973839 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:43:36.990503 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:43:37.012123 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:43:37.035419 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:43:37.054551 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:43:37.071987 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:43:37.089400 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:43:37.106223 2142162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
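
The "scp memory --> /var/lib/minikube/kubeconfig" line shows the kubeconfig being rendered in memory and streamed to the node rather than copied from a local file. A hedged sketch of the same idea, using the ssh CLI and sudo tee in place of minikube's internal ssh_runner (the helper name and host string are assumptions):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // copyToNode streams content to destPath on the node by piping it
    // through ssh into `sudo tee`, mirroring the "scp memory --> ..."
    // step above. Illustrative sketch; minikube uses its own SSH session.
    func copyToNode(host, destPath string, content []byte) error {
    	cmd := exec.Command("ssh", host, "sudo tee "+destPath+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(content)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("copy %s: %v: %s", destPath, err, out)
    	}
    	return nil
    }

    func main() {
    	kubeconfig := []byte("apiVersion: v1\nkind: Config\n")
    	// Host string is a placeholder for the node's SSH endpoint.
    	if err := copyToNode("docker@127.0.0.1", "/var/lib/minikube/kubeconfig", kubeconfig); err != nil {
    		fmt.Println("error:", err)
    	}
    }
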
	I1222 01:43:37.118959 2142162 ssh_runner.go:195] Run: openssl version
	I1222 01:43:37.125211 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.132736 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:43:37.140145 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.143651 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.143715 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.184794 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:43:37.192255 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.199933 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:43:37.207517 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.211242 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.211312 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.252148 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:43:37.259423 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.266494 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:43:37.274006 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.277730 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.277799 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.318687 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
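
The openssl/ln sequence above installs each CA into the node's trust store twice: once under its own name and once as the <subject-hash>.0 symlink (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's certificate lookup expects. A sketch of that pattern (paths and function names are illustrative; minikube runs these steps over SSH with `ln -fs`):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // subjectHash runs `openssl x509 -hash -noout -in certPath` and returns
    // the short subject hash OpenSSL uses to index /etc/ssl/certs.
    func subjectHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // installCA symlinks certPath into /etc/ssl/certs under both its own
    // name and the <hash>.0 name. Run as root; illustrative helper.
    func installCA(certPath, name string) error {
    	if err := os.Symlink(certPath, "/etc/ssl/certs/"+name); err != nil && !os.IsExist(err) {
    		return err
    	}
    	hash, err := subjectHash(certPath)
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
    		return err
    	}
    	return nil
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "minikubeCA.pem"); err != nil {
    		fmt.Println("error:", err)
    	}
    }
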
	I1222 01:43:37.326275 2142162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:43:37.330267 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:43:37.371711 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:43:37.412636 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:43:37.455494 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:43:37.496495 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:43:37.537537 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
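
Each `-checkend 86400` invocation above asks openssl whether the certificate will expire within 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regeneration. The same check in pure Go (the file path in main is taken from the log; the helper name is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires within d — the same test `openssl x509 -checkend` performs
    // (86400 seconds = 24h in the log above).
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
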
	I1222 01:43:37.578877 2142162 kubeadm.go:401] StartCluster: {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:37.578975 2142162 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:43:37.579049 2142162 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:43:37.606764 2142162 cri.go:96] found id: ""
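
The cri.go lines show minikube asking crictl for any surviving kube-system containers before deciding between a cluster restart and a fresh kubeadm init; `found id: ""` means none were running. A sketch of that listing (helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers returns the container IDs crictl reports
    // for the kube-system namespace, mirroring the cri.go listing above.
    // An empty slice (the `found id: ""` case) means no containers
    // survived. Requires crictl on PATH and root privileges.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
    		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }
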
	I1222 01:43:37.606851 2142162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:43:37.614979 2142162 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:43:37.615050 2142162 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:43:37.615145 2142162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:43:37.622476 2142162 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:43:37.622933 2142162 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:37.623103 2142162 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-491054" cluster setting kubeconfig missing "no-preload-491054" context setting]
	I1222 01:43:37.623423 2142162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.624922 2142162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:43:37.632919 2142162 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:43:37.632949 2142162 kubeadm.go:602] duration metric: took 17.880466ms to restartPrimaryControlPlane
	I1222 01:43:37.632959 2142162 kubeadm.go:403] duration metric: took 54.094028ms to StartCluster
	I1222 01:43:37.632974 2142162 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.633031 2142162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:37.633700 2142162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.633896 2142162 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:43:37.634282 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:37.634245 2142162 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:43:37.634339 2142162 addons.go:70] Setting storage-provisioner=true in profile "no-preload-491054"
	I1222 01:43:37.634361 2142162 addons.go:239] Setting addon storage-provisioner=true in "no-preload-491054"
	I1222 01:43:37.634393 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.634429 2142162 addons.go:70] Setting dashboard=true in profile "no-preload-491054"
	I1222 01:43:37.634454 2142162 addons.go:239] Setting addon dashboard=true in "no-preload-491054"
	W1222 01:43:37.634473 2142162 addons.go:248] addon dashboard should already be in state true
	I1222 01:43:37.634514 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.634848 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.635327 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.636958 2142162 addons.go:70] Setting default-storageclass=true in profile "no-preload-491054"
	I1222 01:43:37.636991 2142162 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-491054"
	I1222 01:43:37.637296 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.637634 2142162 out.go:179] * Verifying Kubernetes components...
	I1222 01:43:37.640709 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:37.689115 2142162 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:43:37.692012 2142162 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:43:37.693731 2142162 addons.go:239] Setting addon default-storageclass=true in "no-preload-491054"
	I1222 01:43:37.693777 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.694198 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.694799 2142162 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:43:37.694913 2142162 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:37.694931 2142162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:43:37.694979 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.697616 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:43:37.697665 2142162 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:43:37.697851 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.735798 2142162 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:37.735820 2142162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:43:37.735884 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.742160 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.745661 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.771276 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
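
The cli_runner inspect calls above resolve which host port Docker published for the container's 22/tcp; the sshutil lines then dial 127.0.0.1:38781 with the profile's id_rsa key. A sketch of the port lookup using the exact template from the log (helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks Docker which host port is published for the
    // container's 22/tcp, using the same Go template as the cli_runner
    // lines above; minikube then dials 127.0.0.1:<port> for SSH.
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("no-preload-491054")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }
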
	I1222 01:43:37.878826 2142162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:43:37.905903 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:43:37.905939 2142162 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:43:37.908326 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:37.924424 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:37.931427 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:43:37.931493 2142162 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:43:37.987971 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:43:37.988038 2142162 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:43:38.013982 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:43:38.014064 2142162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:43:38.030075 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:43:38.030106 2142162 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:43:38.044755 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:43:38.044781 2142162 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:43:38.060102 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:43:38.060127 2142162 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:43:38.074803 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:43:38.074827 2142162 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:43:38.089429 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:38.089492 2142162 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:43:38.107550 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:38.596163 2142162 node_ready.go:35] waiting up to 6m0s for node "no-preload-491054" to be "Ready" ...
	W1222 01:43:38.596533 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:38.596585 2142162 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.596646 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.596869 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
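
Every apply above fails the same way: kubectl cannot download the OpenAPI schema for validation because the apiserver on localhost:8443 is still coming up, so addons.go re-queues each manifest (retry.go: "will retry after 300ms"). A sketch of that retry loop (the delays and helper name are illustrative, not minikube's exact schedule):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply` with growing delays until
    // the API server answers — the pattern behind the retry.go/addons.go
    // lines above. Illustrative sketch, not minikube's implementation.
    func applyWithRetry(manifest string, attempts int) error {
    	delay := 300 * time.Millisecond
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
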
	I1222 01:43:38.842397 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:38.893925 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:38.919126 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:38.920556 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.990827 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.996043 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.180030 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:39.243487 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.507921 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:39.546361 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:39.585095 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:39.637531 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.923912 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:39.969528 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:39.984506 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:40.021662 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:40.047521 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:40.093478 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:40.597265 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
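
node_ready.go:35 starts a 6-minute wait for the node's Ready condition, and the node_ready.go:55 warnings show it tolerating connection-refused errors while the apiserver restarts. A sketch of such a poll using kubectl's JSONPath output (the helper name and 2-second interval are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitNodeReady polls the node's Ready condition until it is True or
    // the timeout elapses, retrying through transient apiserver errors
    // as the node_ready.go lines above do. Illustrative sketch.
    func waitNodeReady(node string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "get", "node", node, "-o",
    			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // retry on errors and NotReady alike
    	}
    	return fmt.Errorf("node %s not Ready within %s", node, timeout)
    }

    func main() {
    	if err := waitNodeReady("no-preload-491054", 6*time.Minute); err != nil {
    		fmt.Println("error:", err)
    	}
    }
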
	I1222 01:43:40.934859 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:40.995829 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:40.998914 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:41.002415 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:41.074990 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:41.086742 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:41.721168 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:41.778438 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:41.812650 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:41.884140 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:42.092665 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:42.179951 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:43.096812 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:43.264030 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:43.326256 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:43.941488 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:44.001292 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
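
	The stderr's own suggestion (--validate=false) would not rescue these retries: validation is only the first round-trip to the apiserver, and the apply itself needs the same connection. For illustration (hypothetical invocation; same binary and manifest paths as the log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storageclass.yaml
	# Still fails while the apiserver is down; the error simply moves from the
	# schema download to the apply request itself.
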
	I1222 01:43:44.307335 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:44.364551 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:45.097718 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
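
	The interleaved node_ready warnings are a separate poll that hits the node's cluster IP (192.168.76.2) instead of localhost and gets the same refusal, so both code paths agree the apiserver is down. The equivalent manual check (a sketch using standard kubectl jsonpath):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get node no-preload-491054 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
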
	I1222 01:43:45.303907 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:45.386285 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard apply failure at 01:43:43: all ten manifests fail validation with connection refused on localhost:8443)
	I1222 01:43:46.276008 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:46.335276 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass apply failure at 01:43:44)
	W1222 01:43:47.596694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:48.343243 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:48.427142 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner apply failure at 01:43:44)
	W1222 01:43:49.596755 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:50.701128 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:50.740652 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:50.774874 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard apply failure at 01:43:43)
	W1222 01:43:50.809842 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass apply failure at 01:43:44)
	I1222 01:43:51.233104 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:51.289972 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner apply failure at 01:43:44)
	W1222 01:43:52.096678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:43:54.596681 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:55.496576 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:55.556091 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass apply failure at 01:43:44)
	I1222 01:43:55.556130 2142162 retry.go:84] will retry after 5.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storageclass failure immediately above)
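
	The "will retry after" delays that follow (5.1s, then 14.3s, 31.7s, 43.4s, 32.9s, 39.2s) are consistent with exponential backoff plus jitter. A minimal bash illustration of that pattern (an analogy only; minikube's retry.go implements this in Go):

	delay=5
	for attempt in 1 2 3 4; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/storageclass.yaml && break
	  # roughly double the wait each round, with sub-second random jitter
	  sleep "${delay}.$((RANDOM % 10))"
	  delay=$((delay * 2))
	done
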
	W1222 01:43:57.097745 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:57.830249 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:57.896591 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner apply failure at 01:43:44)
	I1222 01:43:59.048455 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:59.117670 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard apply failure at 01:43:43)
	W1222 01:43:59.596625 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:00.689639 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:00.748079 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass apply failure at 01:43:44)
	W1222 01:44:01.596784 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:04.096680 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:06.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:07.740464 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:07.801397 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard apply failure at 01:43:43)
	I1222 01:44:07.801438 2142162 retry.go:84] will retry after 14.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the dashboard failure immediately above)
	I1222 01:44:08.406997 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:08.464207 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner apply failure at 01:43:44)
	W1222 01:44:08.596682 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:08.637968 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:08.697473 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass apply failure at 01:43:44)
	W1222 01:44:10.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	(node Ready poll failed identically at 01:44:13, 01:44:15, 01:44:17 and 01:44:20)
	W1222 01:44:22.096730 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:22.136999 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:22.201886 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard apply failure at 01:43:43)
	I1222 01:44:22.201932 2142162 retry.go:84] will retry after 31.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the dashboard failure immediately above)
	W1222 01:44:24.596746 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:27.096668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:28.713976 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:28.773383 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass apply failure at 01:43:44)
	I1222 01:44:28.773420 2142162 retry.go:84] will retry after 43.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storageclass failure immediately above)
	W1222 01:44:29.097212 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:29.449928 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:29.512969 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner apply failure at 01:43:44)
	W1222 01:44:31.596698 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	(same node Ready poll failure repeated at 2-2.5s intervals from 01:44:33 through 01:44:47)
	W1222 01:44:49.596695 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:50.536518 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:50.603160 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner apply failure at 01:43:44)
	I1222 01:44:50.603196 2142162 retry.go:84] will retry after 32.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storage-provisioner failure immediately above)
	W1222 01:44:51.597438 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:53.597561 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:53.885143 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:53.945074 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard apply failure at 01:43:43)
	I1222 01:44:53.945111 2142162 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the dashboard failure immediately above)
	W1222 01:44:56.096872 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:58.596737 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:00.597427 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:03.096930 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:05.596787 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:08.096924 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:10.097654 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:12.224247 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:45:12.280984 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:12.281100 2142162 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
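Every apply failure above is the same client-side symptom: kubectl cannot download the OpenAPI schema because nothing answers on localhost:8443, so validation fails before any manifest reaches the server. The `--validate=false` escape hatch named in the stderr only skips schema validation and cannot help while the apiserver itself is down. A minimal check from inside the node (standard tooling; the kubeconfig and kubectl paths are the ones used throughout this log):

    # Is anything listening on the apiserver port at all?
    sudo ss -tlnp | grep 8443 || echo "apiserver not listening"
    # Readiness probe that does not touch the OpenAPI endpoint
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz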
	W1222 01:45:12.596649 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:14.596729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:16.596790 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:18.596938 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:21.096848 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:23.505160 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:45:23.571151 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:23.571268 2142162 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:45:23.597779 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:26.096739 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:28.096845 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:30.096928 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:32.596762 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:33.183034 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:45:33.241129 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:33.241240 2142162 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:45:33.244397 2142162 out.go:179] * Enabled addons: 
	I1222 01:45:33.247144 2142162 addons.go:530] duration metric: took 1m55.612898686s for enable addons: enabled=[]
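The addon phase gives up with an empty enabled list after 1m55s of retries. If the control plane recovers later, the same applies can be rerun by hand; this is the exact command minikube was retrying, copied from the log above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml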
	W1222 01:45:34.596832 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:37.097200 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:39.596789 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:42.097919 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:44.596713 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:46.596836 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:49.097713 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:51.596858 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:54.096783 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:56.096929 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:58.596920 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:00.597362 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:03.096800 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:05.596821 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:08.096838 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:10.596790 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:12.596879 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:15.097761 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:17.597722 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:20.096717 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:22.096938 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:24.596790 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:26.596844 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:28.596907 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:30.597131 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:32.597642 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:35.097686 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:37.596814 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:40.097013 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:42.097822 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:44.596698 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:46.597195 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:46:48.597596 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:46:53.725590 2134286 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001122316s
	I1222 01:46:53.725621 2134286 kubeadm.go:319] 
	I1222 01:46:53.725676 2134286 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:46:53.725720 2134286 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:46:53.725879 2134286 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:46:53.725891 2134286 kubeadm.go:319] 
	I1222 01:46:53.725990 2134286 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:46:53.726023 2134286 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:46:53.726055 2134286 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:46:53.726063 2134286 kubeadm.go:319] 
	I1222 01:46:53.729947 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:46:53.730369 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:46:53.730481 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:46:53.730720 2134286 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:46:53.730733 2134286 kubeadm.go:319] 
	I1222 01:46:53.730802 2134286 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:46:53.730854 2134286 kubeadm.go:403] duration metric: took 8m6.124408914s to StartCluster
	I1222 01:46:53.730901 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:46:53.730968 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:46:53.759901 2134286 cri.go:96] found id: ""
	I1222 01:46:53.759933 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.759942 2134286 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:46:53.759948 2134286 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:46:53.760012 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:46:53.785167 2134286 cri.go:96] found id: ""
	I1222 01:46:53.785192 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.785201 2134286 logs.go:284] No container was found matching "etcd"
	I1222 01:46:53.785208 2134286 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:46:53.785267 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:46:53.810142 2134286 cri.go:96] found id: ""
	I1222 01:46:53.810168 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.810177 2134286 logs.go:284] No container was found matching "coredns"
	I1222 01:46:53.810184 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:46:53.810243 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:46:53.839873 2134286 cri.go:96] found id: ""
	I1222 01:46:53.839895 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.839903 2134286 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:46:53.839910 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:46:53.839968 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:46:53.874440 2134286 cri.go:96] found id: ""
	I1222 01:46:53.874526 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.874548 2134286 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:46:53.874593 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:46:53.874690 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:46:53.926286 2134286 cri.go:96] found id: ""
	I1222 01:46:53.926309 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.926318 2134286 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:46:53.926324 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:46:53.926384 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:46:53.961693 2134286 cri.go:96] found id: ""
	I1222 01:46:53.961717 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.961726 2134286 logs.go:284] No container was found matching "kindnet"
	I1222 01:46:53.961735 2134286 logs.go:123] Gathering logs for kubelet ...
	I1222 01:46:53.961747 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:46:54.027665 2134286 logs.go:123] Gathering logs for dmesg ...
	I1222 01:46:54.027701 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:46:54.044975 2134286 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:46:54.045007 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:46:54.111999 2134286 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:46:54.103598    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.104026    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.105730    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.106387    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.107891    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:46:54.103598    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.104026    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.105730    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.106387    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.107891    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:46:54.112026 2134286 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:46:54.112038 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:46:54.144880 2134286 logs.go:123] Gathering logs for container status ...
	I1222 01:46:54.144914 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
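The diagnostics minikube gathers here (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) can be reproduced from the host when triaging by hand; a sketch using minikube ssh, with the profile name assumed from the CRI-O section below:

    minikube ssh -p newest-cni-704217 -- sudo journalctl -u kubelet -n 400
    minikube ssh -p newest-cni-704217 -- sudo crictl ps -a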
	W1222 01:46:54.176206 2134286 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:46:54.176269 2134286 out.go:285] * 
	W1222 01:46:54.176328 2134286 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:46:54.176346 2134286 out.go:285] * 
	W1222 01:46:54.178676 2134286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:46:54.184460 2134286 out.go:203] 
	W1222 01:46:54.187291 2134286 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:46:54.187335 2134286 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:46:54.187358 2134286 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:46:54.190511 2134286 out.go:203] 
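minikube's suggestion above can be tried on the next start attempt; the flag is quoted verbatim from the log, and the profile name is assumed from the log sections below. Note, though, that the kubelet journal at the end of this report shows the actual failure is the cgroup v1 check, so the cgroup-driver setting may not be the relevant fix here:

    minikube start -p newest-cni-704217 --extra-config=kubelet.cgroup-driver=systemd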
	
	
	==> CRI-O <==
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186476851Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186517489Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186556651Z" level=info msg="Create NRI interface"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186675893Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186684803Z" level=info msg="runtime interface created"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.18669798Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186704995Z" level=info msg="runtime interface starting up..."
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186711428Z" level=info msg="starting plugins..."
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186735969Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186808861Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:38:46 newest-cni-704217 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.941801178Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=363e5130-37e7-4794-9a64-87db5602a3b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.942603353Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=01966ee5-85dd-4af2-a16a-e2904ef39b35 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.943190781Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=d1cc02a5-997f-47e2-bf1e-f6898b035ae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.943676001Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=07eb6416-cb40-428d-b4e2-6e11da6ff396 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.944098356Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=27d83e05-afc4-4f4d-9c18-9d6d73e88161 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.94456468Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4c23eaab-b844-4e55-8d5a-8b5f6c5eecc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.945065752Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c7c28c5c-4525-4455-b143-1cb99fff7891 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.388594528Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=d22a8ef1-a27f-4748-8413-486dc5c014b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.389478407Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=c9b2b1e7-6b2f-4965-9da6-e9e955d42f2b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.390057606Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=dbb891b5-77d5-46d1-a8e0-9d7a1d8e94a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.390520107Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=e02a5d41-e3c3-4923-910e-677d823dc19d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.393055956Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9e548051-e2c2-4223-9635-9e7890b00ab4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.393806801Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cbc0faec-b0b9-4ab6-9d68-74de207b6122 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.394351244Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8d4054a3-5da7-48ed-abef-cdceb6553ab0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:46:55.258928    5103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:55.259596    5103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:55.261304    5103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:55.261819    5103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:55.263272    5103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:46:55 up 1 day,  8:29,  0 user,  load average: 1.29, 0.92, 1.57
	Linux newest-cni-704217 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:46:52 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:46:53 newest-cni-704217 kubelet[4904]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:46:53 newest-cni-704217 kubelet[4904]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:46:53 newest-cni-704217 kubelet[4904]: E1222 01:46:53.152403    4904 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:46:53 newest-cni-704217 kubelet[4951]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:46:53 newest-cni-704217 kubelet[4951]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:46:53 newest-cni-704217 kubelet[4951]: E1222 01:46:53.946337    4951 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:46:53 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:46:54 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 22 01:46:54 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:46:54 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:46:54 newest-cni-704217 kubelet[5015]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:46:54 newest-cni-704217 kubelet[5015]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:46:54 newest-cni-704217 kubelet[5015]: E1222 01:46:54.677992    5015 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:46:54 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:46:54 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
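The kubelet log above is the root cause of this FirstStart failure: across restart counters 645-647 the v1.35.0-rc.1 kubelet exits during config validation because it "is configured to not run on a host using cgroup v1". One way to confirm which cgroup version a host is actually on (a sketch using standard Linux and Docker facilities, not commands taken from this run):

    # cgroup2fs => unified cgroup v2; tmpfs => legacy/hybrid cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # Docker reports the same information directly
    docker info --format '{{.CgroupVersion}}'

If this prints tmpfs, the kubelet's refusal to start is exactly what its error message says it will do on cgroup v1 hosts.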
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 6 (297.51525ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:46:55.760945 2147119 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-704217" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (501.10s)
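The status check above exits 6 because the profile is missing from the integration kubeconfig, so minikube cannot resolve the apiserver endpoint; the tool's own warning names the fix. A sketch using this run's kubeconfig path and profile name:

    # list the contexts the integration kubeconfig actually contains
    KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig kubectl config get-contexts
    # regenerate the context for this profile, as the warning suggests
    minikube update-context -p newest-cni-704217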

TestStartStop/group/no-preload/serial/DeployApp (2.98s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-491054 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-491054 create -f testdata/busybox.yaml: exit status 1 (53.90535ms)

** stderr ** 
	error: context "no-preload-491054" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-491054 create -f testdata/busybox.yaml failed: exit status 1
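The create fails before anything reaches the cluster: kubectl cannot find a "no-preload-491054" context, which indicates the profile's context was never written to the kubeconfig in use. A quick hedged check that distinguishes a missing context from an unreachable cluster:

    # print every context name the active kubeconfig defines
    kubectl config view -o jsonpath='{range .contexts[*]}{.name}{"\n"}{end}'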
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2114161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:32:55.338008768Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0028d366ff0985d5277d2d2f90b7ddf8e366e82e83e7670b234adfbab8a808ab",
	            "SandboxKey": "/var/run/docker/netns/0028d366ff09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38752"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38755"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38753"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38754"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:04:e9:7c:da:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "31fb2fe9d31e4ac100d348896a6811cbe2d47df53862df096768f216d2449c34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
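The inspect output shows the container itself is healthy: running, with the apiserver port 8443 published on 127.0.0.1:38754. So the failure above is a kubeconfig problem rather than a dead node. The same Go template the harness uses for port 22 (visible in the Last Start log below) can pull the mapped apiserver port; a sketch:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-491054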
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 6 (293.978256ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:41:29.583472 2139068 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:34.723707 2134286 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:34.723823 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.723833 2134286 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:34.723839 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.724123 2134286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:38:34.724545 2134286 out.go:368] Setting JSON to false
	I1222 01:38:34.725446 2134286 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116464,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:38:34.725511 2134286 start.go:143] virtualization:  
	I1222 01:38:34.729663 2134286 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:34.733886 2134286 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:34.734036 2134286 notify.go:221] Checking for updates...
	I1222 01:38:34.740441 2134286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:34.743585 2134286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:38:34.746751 2134286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:38:34.749869 2134286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:34.752958 2134286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:34.756493 2134286 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:34.756622 2134286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:34.783091 2134286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:34.783353 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.839782 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.830338261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.839888 2134286 docker.go:319] overlay module found
	I1222 01:38:34.845569 2134286 out.go:179] * Using the docker driver based on user configuration
	I1222 01:38:34.848592 2134286 start.go:309] selected driver: docker
	I1222 01:38:34.848611 2134286 start.go:928] validating driver "docker" against <nil>
	I1222 01:38:34.848634 2134286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:34.849297 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.935540 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.926549754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.935703 2134286 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:38:34.935733 2134286 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:38:34.935959 2134286 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:34.938976 2134286 out.go:179] * Using Docker driver with root privileges
	I1222 01:38:34.941882 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:34.941953 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:34.941972 2134286 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:38:34.942050 2134286 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:34.945127 2134286 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:38:34.947951 2134286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:38:34.950705 2134286 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:34.953516 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:34.953561 2134286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:38:34.953574 2134286 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:34.953598 2134286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:34.953659 2134286 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:34.953669 2134286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:38:34.953790 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:34.953808 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json: {Name:mkbd82291dca772e741ca6a4965901d8562f02f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:34.972608 2134286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:34.972632 2134286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:34.972653 2134286 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:34.972687 2134286 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:34.972789 2134286 start.go:364] duration metric: took 81.072µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:38:34.972826 2134286 start.go:93] Provisioning new machine with config: &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:38:34.972897 2134286 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:38:34.976294 2134286 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:38:34.976517 2134286 start.go:159] libmachine.API.Create for "newest-cni-704217" (driver="docker")
	I1222 01:38:34.976560 2134286 client.go:173] LocalClient.Create starting
	I1222 01:38:34.976631 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:38:34.976666 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976687 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.976752 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:38:34.976774 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976789 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.977153 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:38:34.991933 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:38:34.992025 2134286 network_create.go:284] running [docker network inspect newest-cni-704217] to gather additional debugging logs...
	I1222 01:38:34.992051 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217
	W1222 01:38:35.009843 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 returned with exit code 1
	I1222 01:38:35.009879 2134286 network_create.go:287] error running [docker network inspect newest-cni-704217]: docker network inspect newest-cni-704217: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-704217 not found
	I1222 01:38:35.009894 2134286 network_create.go:289] output of [docker network inspect newest-cni-704217]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-704217 not found
	
	** /stderr **
	I1222 01:38:35.009998 2134286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:35.028311 2134286 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:38:35.028684 2134286 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:38:35.029053 2134286 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:38:35.029412 2134286 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:38:35.029864 2134286 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5ab0}
	I1222 01:38:35.029890 2134286 network_create.go:124] attempt to create docker network newest-cni-704217 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:38:35.029949 2134286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-704217 newest-cni-704217
	I1222 01:38:35.091875 2134286 network_create.go:108] docker network newest-cni-704217 192.168.85.0/24 created
	I1222 01:38:35.091908 2134286 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-704217" container
	I1222 01:38:35.091998 2134286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:38:35.109188 2134286 cli_runner.go:164] Run: docker volume create newest-cni-704217 --label name.minikube.sigs.k8s.io=newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:38:35.127123 2134286 oci.go:103] Successfully created a docker volume newest-cni-704217
	I1222 01:38:35.127220 2134286 cli_runner.go:164] Run: docker run --rm --name newest-cni-704217-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --entrypoint /usr/bin/test -v newest-cni-704217:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:38:35.650646 2134286 oci.go:107] Successfully prepared a docker volume newest-cni-704217
	I1222 01:38:35.650718 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:35.650728 2134286 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:38:35.650802 2134286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:38:39.466788 2134286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.81593333s)
	I1222 01:38:39.466823 2134286 kic.go:203] duration metric: took 3.816091086s to extract preloaded images to volume ...
	W1222 01:38:39.466970 2134286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:38:39.467078 2134286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:38:39.518928 2134286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-704217 --name newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-704217 --network newest-cni-704217 --ip 192.168.85.2 --volume newest-cni-704217:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:38:39.824489 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Running}}
	I1222 01:38:39.845273 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:39.868132 2134286 cli_runner.go:164] Run: docker exec newest-cni-704217 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:38:39.918857 2134286 oci.go:144] the created container "newest-cni-704217" has a running status.
	I1222 01:38:39.918885 2134286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa...
	I1222 01:38:40.057379 2134286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:38:40.081663 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.113212 2134286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:38:40.113247 2134286 kic_runner.go:114] Args: [docker exec --privileged newest-cni-704217 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:38:40.193617 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.220327 2134286 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:40.220433 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:40.250962 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:40.251306 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:40.251316 2134286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:40.251882 2134286 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51396->127.0.0.1:38776: read: connection reset by peer
	I1222 01:38:43.385195 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.385223 2134286 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:38:43.385289 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.406922 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.407255 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.407271 2134286 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:38:43.550918 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.550996 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.569002 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.569357 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.569382 2134286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:43.701601 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:43.701676 2134286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:38:43.701715 2134286 ubuntu.go:190] setting up certificates
	I1222 01:38:43.701726 2134286 provision.go:84] configureAuth start
	I1222 01:38:43.701795 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:43.718903 2134286 provision.go:143] copyHostCerts
	I1222 01:38:43.718967 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:38:43.718976 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:38:43.719054 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:38:43.719156 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:38:43.719162 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:38:43.719188 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:38:43.719246 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:38:43.719251 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:38:43.719274 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:38:43.719328 2134286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
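
The server-cert step signs a certificate against the shared minikubeCA with exactly the org and SANs printed above. A minimal sketch with Go's standard library, assuming local ca.pem/ca-key.pem copies of the logged CA pair (file names hypothetical; minikube's own provisioning code differs in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate against a CA, with the org
// and SANs from the log line above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-704217"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-704217"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	caPEM, err := os.ReadFile("ca.pem") // hypothetical local copy of the CA cert
	if err != nil {
		log.Fatal(err)
	}
	keyPEM, err := os.ReadFile("ca-key.pem") // hypothetical local copy of the CA key
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	cert, err := signServerCert(caCert, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", cert, 0o644)
}
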
	I1222 01:38:43.977508 2134286 provision.go:177] copyRemoteCerts
	I1222 01:38:43.977583 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:43.977627 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.995727 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.093656 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:44.115417 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:44.140528 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:38:44.162225 2134286 provision.go:87] duration metric: took 460.475764ms to configureAuth
	I1222 01:38:44.162253 2134286 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:44.162450 2134286 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:44.162564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.180792 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:44.181103 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:44.181121 2134286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:38:44.469915 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:38:44.469949 2134286 machine.go:97] duration metric: took 4.249594356s to provisionDockerMachine
	I1222 01:38:44.469959 2134286 client.go:176] duration metric: took 9.493388716s to LocalClient.Create
	I1222 01:38:44.469974 2134286 start.go:167] duration metric: took 9.493457957s to libmachine.API.Create "newest-cni-704217"
	I1222 01:38:44.469987 2134286 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:38:44.469998 2134286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:44.470072 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:44.470110 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.487566 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.585312 2134286 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:44.588561 2134286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:44.588588 2134286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:44.588600 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:38:44.588655 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:38:44.588735 2134286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:38:44.588834 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:44.596094 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:44.613273 2134286 start.go:296] duration metric: took 143.27184ms for postStartSetup
	I1222 01:38:44.613679 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.630261 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:44.630541 2134286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:44.630589 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.647052 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.738630 2134286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:44.743420 2134286 start.go:128] duration metric: took 9.770509222s to createHost
	I1222 01:38:44.743497 2134286 start.go:83] releasing machines lock for "newest-cni-704217", held for 9.770692552s
	I1222 01:38:44.743588 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.760151 2134286 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:44.760209 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.760507 2134286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:44.760564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.782041 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.788656 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.973622 2134286 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:44.980619 2134286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:38:45.057102 2134286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:45.064439 2134286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:45.064557 2134286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:45.103917 2134286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
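
Disabling the bridge CNI configs is a glob-and-rename: anything matching *bridge* or *podman* under /etc/cni/net.d gets a .mk_disabled suffix unless it already carries one, so CRI-O won't load it. A rough Go equivalent of the logged find/mv, as a sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already parked
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
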
	I1222 01:38:45.103956 2134286 start.go:496] detecting cgroup driver to use...
	I1222 01:38:45.103997 2134286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:45.104069 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:38:45.126644 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:38:45.158107 2134286 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:45.158598 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:45.189553 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:45.237377 2134286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:45.372783 2134286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:45.495736 2134286 docker.go:234] disabling docker service ...
	I1222 01:38:45.495825 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:45.520536 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:45.534342 2134286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:45.667915 2134286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:45.795197 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:45.808465 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:45.822173 2134286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:38:45.822242 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.830599 2134286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:38:45.830673 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.839239 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.847549 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.856111 2134286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:45.863959 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.872612 2134286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.886086 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.895578 2134286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:45.903269 2134286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:45.910730 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.026286 2134286 ssh_runner.go:195] Run: sudo systemctl restart crio
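
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before the restart (reconstructed from the logged commands, not read from the machine):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
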
	I1222 01:38:46.193052 2134286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:38:46.193121 2134286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:38:46.197063 2134286 start.go:564] Will wait 60s for crictl version
	I1222 01:38:46.197156 2134286 ssh_runner.go:195] Run: which crictl
	I1222 01:38:46.200956 2134286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:46.225282 2134286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:38:46.225459 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.257224 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.288579 2134286 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:38:46.291328 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:46.307792 2134286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:46.311726 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
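
The /etc/hosts rewrite above is the usual filter-and-append pattern: drop any stale host.minikube.internal line, append the gateway mapping, and copy the result back. A Go sketch of the same logic (direct write shown for brevity; the real command stages via /tmp/h.$$ and sudo cp):

package main

import (
	"os"
	"strings"
)

// updateHosts drops any existing mapping for name, then appends a fresh
// "ip<TAB>name" entry. Writing /etc/hosts directly needs root.
func updateHosts(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: filter it out, like the grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := updateHosts("192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
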
	I1222 01:38:46.324397 2134286 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:38:46.327309 2134286 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:46.327454 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:46.327539 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.376288 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.376313 2134286 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:38:46.376388 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.406533 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.406556 2134286 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:46.406564 2134286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:38:46.406653 2134286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:46.406737 2134286 ssh_runner.go:195] Run: crio config
	I1222 01:38:46.477901 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:46.477922 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:46.477963 2134286 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:46.477999 2134286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:46.478139 2134286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:38:46.478220 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:46.486049 2134286 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:46.486137 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:46.493605 2134286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:38:46.506655 2134286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:46.520439 2134286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:38:46.533218 2134286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:46.537007 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.546792 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.662666 2134286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:46.679594 2134286 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:38:46.679659 2134286 certs.go:195] generating shared ca certs ...
	I1222 01:38:46.679688 2134286 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.679859 2134286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:38:46.679932 2134286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:38:46.679968 2134286 certs.go:257] generating profile certs ...
	I1222 01:38:46.680044 2134286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:38:46.680085 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt with IP's: []
	I1222 01:38:46.841660 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt ...
	I1222 01:38:46.841690 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt: {Name:mk0a220848ae73520e360330c9832985737c86ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.841917 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key ...
	I1222 01:38:46.841933 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key: {Name:mk3fb1e07728181520997d7299a7d80c9afcd3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.842027 2134286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:38:46.842046 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:38:46.952611 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 ...
	I1222 01:38:46.952645 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856: {Name:mk23d75e59a433395bcccffb2237600ddcfba975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952842 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 ...
	I1222 01:38:46.952855 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856: {Name:mke91b318f6e1a5e82c06575fddda672305e0a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952949 2134286 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt
	I1222 01:38:46.953039 2134286 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key
	I1222 01:38:46.953108 2134286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:38:46.953127 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt with IP's: []
	I1222 01:38:47.135005 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt ...
	I1222 01:38:47.135039 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt: {Name:mk0bf95a48d358b9b6753ae55278429fc5cfa2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135258 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key ...
	I1222 01:38:47.135281 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key: {Name:mk2786f86b96c0685cea4c15b093d231aba8608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135484 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:38:47.135534 2134286 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:47.135550 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:38:47.135580 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:47.135609 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:47.135636 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:47.135686 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:47.137405 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:47.165131 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:47.186245 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:47.206617 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:47.225290 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:47.243359 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:38:47.261575 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:47.279279 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:47.296796 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:38:47.314798 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:38:47.332337 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:47.350465 2134286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:47.363436 2134286 ssh_runner.go:195] Run: openssl version
	I1222 01:38:47.370127 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.377515 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:47.384963 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388630 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388716 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.430642 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:47.438907 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:38:47.446439 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.453815 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:38:47.461441 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465092 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465189 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.506277 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:47.513878 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:38:47.521311 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.528445 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:38:47.535982 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539602 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539691 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.580405 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:47.587812 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
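
The openssl x509 -hash / ln -fs pairs above implement c_rehash by hand: each CA PEM gets a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) so OpenSSL's trust-store lookup can find it. The same pattern as a Go sketch, shelling out to openssl (target paths simplified relative to the logged chain of links):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for a certificate's subject hash and points
// /etc/ssl/certs/<hash>.0 at the PEM, as the logged commands do.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
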
	I1222 01:38:47.602092 2134286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:47.606351 2134286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:38:47.606451 2134286 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:47.606588 2134286 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:47.606681 2134286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:47.642649 2134286 cri.go:96] found id: ""
	I1222 01:38:47.642790 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:47.652615 2134286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:38:47.661497 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:38:47.661607 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:38:47.669765 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:38:47.669786 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:38:47.669862 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:38:47.678062 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:38:47.678185 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:38:47.685567 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:38:47.693826 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:38:47.693987 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:38:47.702486 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.710049 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:38:47.710137 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.717426 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:38:47.724768 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:38:47.724863 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
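
The four grep/rm pairs above are the stale-kubeconfig sweep: any conf under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm regenerates it (on this first start, none exist yet, hence every grep exits 2). Condensed as a Go sketch (the real flow runs grep/rm over SSH with sudo; direct reads shown for brevity):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil {
			continue // missing file: nothing to clean (the first-start case logged here)
		}
		if !strings.Contains(string(data), endpoint) {
			exec.Command("sudo", "rm", "-f", path).Run() // stale: let kubeadm rewrite it
		}
	}
}
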
	I1222 01:38:47.731996 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:38:47.851611 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:38:47.852034 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:38:47.938478 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.168234 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000189279s
	I1222 01:41:27.168261 2113859 kubeadm.go:319] 
	I1222 01:41:27.168319 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:41:27.168358 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:41:27.168485 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:41:27.168497 2113859 kubeadm.go:319] 
	I1222 01:41:27.168608 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:41:27.168644 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:41:27.168684 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:41:27.168692 2113859 kubeadm.go:319] 
	I1222 01:41:27.172924 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:41:27.173370 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:41:27.173496 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.173763 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:41:27.173774 2113859 kubeadm.go:319] 
	I1222 01:41:27.173843 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:41:27.173900 2113859 kubeadm.go:403] duration metric: took 8m7.846004041s to StartCluster
	I1222 01:41:27.173948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:27.174010 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:27.198460 2113859 cri.go:96] found id: ""
	I1222 01:41:27.198503 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.198512 2113859 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:27.198519 2113859 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:41:27.198581 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:27.223806 2113859 cri.go:96] found id: ""
	I1222 01:41:27.223832 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.223840 2113859 logs.go:284] No container was found matching "etcd"
	I1222 01:41:27.223847 2113859 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:41:27.223904 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:27.250017 2113859 cri.go:96] found id: ""
	I1222 01:41:27.250042 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.250051 2113859 logs.go:284] No container was found matching "coredns"
	I1222 01:41:27.250068 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:27.250129 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:27.275909 2113859 cri.go:96] found id: ""
	I1222 01:41:27.275934 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.275942 2113859 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:27.275948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:27.276004 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:27.300182 2113859 cri.go:96] found id: ""
	I1222 01:41:27.300215 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.300225 2113859 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:27.300231 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:27.300293 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:27.324772 2113859 cri.go:96] found id: ""
	I1222 01:41:27.324797 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.324806 2113859 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:27.324813 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:27.324868 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:27.353970 2113859 cri.go:96] found id: ""
	I1222 01:41:27.354043 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.354058 2113859 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:27.354068 2113859 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:27.354081 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:27.422083 2113859 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:27.422119 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:27.438918 2113859 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:27.438950 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:27.507814 2113859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:27.507835 2113859 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:41:27.507846 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:41:27.550141 2113859 logs.go:123] Gathering logs for container status ...
	I1222 01:41:27.550178 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:27.578268 2113859 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:41:27.578345 2113859 out.go:285] * 
	W1222 01:41:27.578396 2113859 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:41:27.578413 2113859 out.go:285] * 
	W1222 01:41:27.580529 2113859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:41:27.585764 2113859 out.go:203] 
	W1222 01:41:27.589556 2113859 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:41:27.589611 2113859 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:41:27.589635 2113859 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:41:27.592761 2113859 out.go:203] 
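	
	The exit above carries its own remediation hints: the preflight warning says that on a cgroup v1 host, kubelet v1.35+ only starts if the KubeletConfiguration option FailCgroupV1 is set to false (and the validation is skipped), and the suggestion line proposes a cgroup-driver override. A sketch of the suggested retry, assuming the same profile and flags recorded in the audit log below:
	
	    minikube start -p no-preload-491054 --memory=3072 --preload=false \
	      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 \
	      --extra-config=kubelet.cgroup-driver=systemd
	    # the kubeadm warning names the other escape hatch: failCgroupV1: false
	    # in the kubelet configuration, per KEP-5573
	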
	
	
	==> CRI-O <==
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665075722Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-rc.1 not found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665126248Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-apiserver:v1.35.0-rc.1 found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464435752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464781145Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464820692Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637248018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.63753477Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637589932Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903489965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903812647Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903864485Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.666376352Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=d373ec24-b32f-4bc7-b1ee-dbc429c84518 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.670731699Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=985533ce-73ee-492d-bec3-e3f7745742a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.672178252Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=4324ee88-2078-4e1d-ac8c-226946777419 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.673730164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=94416a1e-c2e8-4002-b2df-c67290c0deda name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.675651066Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=26b90d19-1583-4402-8028-995cec1d8c95 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.678449012Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4b69cbcb-1564-462b-975a-a87b64ce4d87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.680670114Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=7125d5bc-0162-450e-a7bb-265b2a72453d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.297315654Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=7a582268-ead4-4858-9bcb-880317cde6d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.301718861Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=758439ba-2257-42b2-b322-7eee997029e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.303295905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=6d1d7a92-5737-428e-8f96-ffefc25ceae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.305283578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ebe1fb9e-67e5-4ffb-b4bd-137d43ff137b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.306347021Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6bb0a1d2-344b-4b71-8ad1-df4251af2703 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.309548352Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3c577299-eacf-44a2-91b2-dea64d0df8f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.310598748Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c9f46b28-9cc1-401a-9feb-ff7f5b2f29b5 name=/runtime.v1.ImageService/ImageStatus
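	
	The repeated "Image ... not found" pairs above are expected for this test: with --preload=false, CRI-O starts from an empty image store and kubeadm has to pull every control-plane image itself ("[preflight] Pulling images required for setting up a Kubernetes cluster"). A sketch of how to watch that progress directly, assuming crictl is available inside the node:
	
	    minikube ssh -p no-preload-491054 "sudo crictl images"
	    # the kube-apiserver/etcd/coredns images should appear here once pulled
	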
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:30.246187    5834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:30.246752    5834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:30.248292    5834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:30.248898    5834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:30.250409    5834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:41:30 up 1 day,  8:23,  0 user,  load average: 0.50, 1.46, 1.98
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:41:27 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:28 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 22 01:41:28 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:28 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:28 no-preload-491054 kubelet[5624]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:28 no-preload-491054 kubelet[5624]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:28 no-preload-491054 kubelet[5624]: E1222 01:41:28.419804    5624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:28 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:28 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:29 no-preload-491054 kubelet[5729]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:29 no-preload-491054 kubelet[5729]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:29 no-preload-491054 kubelet[5729]: E1222 01:41:29.212969    5729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:29 no-preload-491054 kubelet[5759]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:29 no-preload-491054 kubelet[5759]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:29 no-preload-491054 kubelet[5759]: E1222 01:41:29.924051    5759 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
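	
	The journal above shows a tight crash loop: systemd restarts kubelet (counters 649, 650, 651), and every instance exits during config validation because the host exposes cgroup v1. Which hierarchy the node actually mounts can be read directly; a sketch:
	
	    minikube ssh -p no-preload-491054 "stat -fc %T /sys/fs/cgroup/"
	    # cgroup2fs => unified v2; tmpfs => the legacy v1 hierarchy this kubelet refuses
	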
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 6 (391.214006ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:41:30.744204 2139286 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
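
Both status probes also warn that the profile is missing from the test kubeconfig ("does not appear in .../kubeconfig"), and the warning text names the repair itself. A sketch, assuming the same profile:

    minikube update-context -p no-preload-491054
    kubectl config get-contexts    # no-preload-491054 should now be listed
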
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2114161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:32:55.338008768Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0028d366ff0985d5277d2d2f90b7ddf8e366e82e83e7670b234adfbab8a808ab",
	            "SandboxKey": "/var/run/docker/netns/0028d366ff09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38752"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38755"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38753"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38754"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:04:e9:7c:da:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "31fb2fe9d31e4ac100d348896a6811cbe2d47df53862df096768f216d2449c34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
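
The inspect dump shows the kic container itself is fine (State.Status "running", IP 192.168.76.2 on the no-preload-491054 network), so the failure is inside the node rather than at the Docker layer. Single fields can be pulled with the same Go-template --format style the harness uses for minikube status; a sketch:

    docker inspect -f '{{.State.Status}}' no-preload-491054
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-491054
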
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 6 (315.81159ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:41:31.078902 2139373 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:32 UTC │                     │
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:34.723707 2134286 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:34.723823 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.723833 2134286 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:34.723839 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.724123 2134286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:38:34.724545 2134286 out.go:368] Setting JSON to false
	I1222 01:38:34.725446 2134286 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116464,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:38:34.725511 2134286 start.go:143] virtualization:  
	I1222 01:38:34.729663 2134286 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:34.733886 2134286 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:34.734036 2134286 notify.go:221] Checking for updates...
	I1222 01:38:34.740441 2134286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:34.743585 2134286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:38:34.746751 2134286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:38:34.749869 2134286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:34.752958 2134286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:34.756493 2134286 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:34.756622 2134286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:34.783091 2134286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:34.783353 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.839782 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.830338261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.839888 2134286 docker.go:319] overlay module found
	I1222 01:38:34.845569 2134286 out.go:179] * Using the docker driver based on user configuration
	I1222 01:38:34.848592 2134286 start.go:309] selected driver: docker
	I1222 01:38:34.848611 2134286 start.go:928] validating driver "docker" against <nil>
	I1222 01:38:34.848634 2134286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:34.849297 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.935540 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.926549754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.935703 2134286 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:38:34.935733 2134286 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:38:34.935959 2134286 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:34.938976 2134286 out.go:179] * Using Docker driver with root privileges
	I1222 01:38:34.941882 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:34.941953 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:34.941972 2134286 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:38:34.942050 2134286 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:34.945127 2134286 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:38:34.947951 2134286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:38:34.950705 2134286 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:34.953516 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:34.953561 2134286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:38:34.953574 2134286 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:34.953598 2134286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:34.953659 2134286 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:34.953669 2134286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:38:34.953790 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:34.953808 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json: {Name:mkbd82291dca772e741ca6a4965901d8562f02f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
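	The profile is persisted as JSON before any machine exists. A minimal sketch for inspecting what was just written, assuming jq is installed on the host (the path comes from the log line above):
	  jq '.KubernetesConfig | {KubernetesVersion, NetworkPlugin, ExtraOptions}' \
	    /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json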
	I1222 01:38:34.972608 2134286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:34.972632 2134286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:34.972653 2134286 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:34.972687 2134286 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:34.972789 2134286 start.go:364] duration metric: took 81.072µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:38:34.972826 2134286 start.go:93] Provisioning new machine with config: &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:38:34.972897 2134286 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:38:34.976294 2134286 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:38:34.976517 2134286 start.go:159] libmachine.API.Create for "newest-cni-704217" (driver="docker")
	I1222 01:38:34.976560 2134286 client.go:173] LocalClient.Create starting
	I1222 01:38:34.976631 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:38:34.976666 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976687 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.976752 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:38:34.976774 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976789 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.977153 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:38:34.991933 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:38:34.992025 2134286 network_create.go:284] running [docker network inspect newest-cni-704217] to gather additional debugging logs...
	I1222 01:38:34.992051 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217
	W1222 01:38:35.009843 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 returned with exit code 1
	I1222 01:38:35.009879 2134286 network_create.go:287] error running [docker network inspect newest-cni-704217]: docker network inspect newest-cni-704217: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-704217 not found
	I1222 01:38:35.009894 2134286 network_create.go:289] output of [docker network inspect newest-cni-704217]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-704217 not found
	
	** /stderr **
	I1222 01:38:35.009998 2134286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:35.028311 2134286 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:38:35.028684 2134286 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:38:35.029053 2134286 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:38:35.029412 2134286 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:38:35.029864 2134286 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5ab0}
	I1222 01:38:35.029890 2134286 network_create.go:124] attempt to create docker network newest-cni-704217 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:38:35.029949 2134286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-704217 newest-cni-704217
	I1222 01:38:35.091875 2134286 network_create.go:108] docker network newest-cni-704217 192.168.85.0/24 created
	I1222 01:38:35.091908 2134286 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-704217" container
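	The subnet scan above settled on 192.168.85.0/24, and minikube derives the node IP as the first client address, .2. A minimal check of the resulting bridge network, using only the standard docker CLI:
	  docker network inspect newest-cni-704217 \
	    --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'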
	I1222 01:38:35.091998 2134286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:38:35.109188 2134286 cli_runner.go:164] Run: docker volume create newest-cni-704217 --label name.minikube.sigs.k8s.io=newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:38:35.127123 2134286 oci.go:103] Successfully created a docker volume newest-cni-704217
	I1222 01:38:35.127220 2134286 cli_runner.go:164] Run: docker run --rm --name newest-cni-704217-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --entrypoint /usr/bin/test -v newest-cni-704217:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:38:35.650646 2134286 oci.go:107] Successfully prepared a docker volume newest-cni-704217
	I1222 01:38:35.650718 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:35.650728 2134286 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:38:35.650802 2134286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:38:39.466788 2134286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.81593333s)
	I1222 01:38:39.466823 2134286 kic.go:203] duration metric: took 3.816091086s to extract preloaded images to volume ...
	W1222 01:38:39.466970 2134286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:38:39.467078 2134286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:38:39.518928 2134286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-704217 --name newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-704217 --network newest-cni-704217 --ip 192.168.85.2 --volume newest-cni-704217:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
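	Each --publish=127.0.0.1::<port> flag above binds the container port to a random loopback port on the host; the SSH steps below recover that port from .NetworkSettings.Ports. The same mapping can be read with the standard docker CLI:
	  docker port newest-cni-704217 22   # prints e.g. 127.0.0.1:38776, the SSH port used below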
	I1222 01:38:39.824489 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Running}}
	I1222 01:38:39.845273 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:39.868132 2134286 cli_runner.go:164] Run: docker exec newest-cni-704217 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:38:39.918857 2134286 oci.go:144] the created container "newest-cni-704217" has a running status.
	I1222 01:38:39.918885 2134286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa...
	I1222 01:38:40.057379 2134286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:38:40.081663 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.113212 2134286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:38:40.113247 2134286 kic_runner.go:114] Args: [docker exec --privileged newest-cni-704217 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:38:40.193617 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.220327 2134286 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:40.220433 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:40.250962 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:40.251306 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:40.251316 2134286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:40.251882 2134286 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51396->127.0.0.1:38776: read: connection reset by peer
	I1222 01:38:43.385195 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.385223 2134286 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:38:43.385289 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.406922 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.407255 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.407271 2134286 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:38:43.550918 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.550996 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.569002 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.569357 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.569382 2134286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:43.701601 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:43.701676 2134286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:38:43.701715 2134286 ubuntu.go:190] setting up certificates
	I1222 01:38:43.701726 2134286 provision.go:84] configureAuth start
	I1222 01:38:43.701795 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:43.718903 2134286 provision.go:143] copyHostCerts
	I1222 01:38:43.718967 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:38:43.718976 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:38:43.719054 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:38:43.719156 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:38:43.719162 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:38:43.719188 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:38:43.719246 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:38:43.719251 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:38:43.719274 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:38:43.719328 2134286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:38:43.977508 2134286 provision.go:177] copyRemoteCerts
	I1222 01:38:43.977583 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:43.977627 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.995727 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.093656 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:44.115417 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:44.140528 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:38:44.162225 2134286 provision.go:87] duration metric: took 460.475764ms to configureAuth
	I1222 01:38:44.162253 2134286 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:44.162450 2134286 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:44.162564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.180792 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:44.181103 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:44.181121 2134286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:38:44.469915 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:38:44.469949 2134286 machine.go:97] duration metric: took 4.249594356s to provisionDockerMachine
	I1222 01:38:44.469959 2134286 client.go:176] duration metric: took 9.493388716s to LocalClient.Create
	I1222 01:38:44.469974 2134286 start.go:167] duration metric: took 9.493457957s to libmachine.API.Create "newest-cni-704217"
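	The SSH command above wrote /etc/sysconfig/crio.minikube and restarted crio inside the node. A hedged sketch for confirming both took effect, using only standard docker/systemctl commands from the host:
	  docker exec newest-cni-704217 cat /etc/sysconfig/crio.minikube
	  docker exec newest-cni-704217 systemctl is-active crio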
	I1222 01:38:44.469987 2134286 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:38:44.469998 2134286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:44.470072 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:44.470110 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.487566 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.585312 2134286 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:44.588561 2134286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:44.588588 2134286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:44.588600 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:38:44.588655 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:38:44.588735 2134286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:38:44.588834 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:44.596094 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:44.613273 2134286 start.go:296] duration metric: took 143.27184ms for postStartSetup
	I1222 01:38:44.613679 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.630261 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:44.630541 2134286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:44.630589 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.647052 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.738630 2134286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:44.743420 2134286 start.go:128] duration metric: took 9.770509222s to createHost
	I1222 01:38:44.743497 2134286 start.go:83] releasing machines lock for "newest-cni-704217", held for 9.770692552s
	I1222 01:38:44.743588 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.760151 2134286 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:44.760209 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.760507 2134286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:44.760564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.782041 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.788656 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.973622 2134286 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:44.980619 2134286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:38:45.057102 2134286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:45.064439 2134286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:45.064557 2134286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:45.103917 2134286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:38:45.103956 2134286 start.go:496] detecting cgroup driver to use...
	I1222 01:38:45.103997 2134286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:45.104069 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:38:45.126644 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:38:45.158107 2134286 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:45.158598 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:45.189553 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:45.237377 2134286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:45.372783 2134286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:45.495736 2134286 docker.go:234] disabling docker service ...
	I1222 01:38:45.495825 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:45.520536 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:45.534342 2134286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:45.667915 2134286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:45.795197 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:45.808465 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:45.822173 2134286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:38:45.822242 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.830599 2134286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:38:45.830673 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.839239 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.847549 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.856111 2134286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:45.863959 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.872612 2134286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.886086 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.895578 2134286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:45.903269 2134286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:45.910730 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.026286 2134286 ssh_runner.go:195] Run: sudo systemctl restart crio
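	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before this restart. A minimal spot-check that the intended values landed, against the same file and sysctl the log touches:
	  docker exec newest-cni-704217 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  docker exec newest-cni-704217 cat /proc/sys/net/ipv4/ip_forward   # expected: 1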
	I1222 01:38:46.193052 2134286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:38:46.193121 2134286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:38:46.197063 2134286 start.go:564] Will wait 60s for crictl version
	I1222 01:38:46.197156 2134286 ssh_runner.go:195] Run: which crictl
	I1222 01:38:46.200956 2134286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:46.225282 2134286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
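	The version probe above goes through /usr/local/bin/crictl, which reads its endpoint from the /etc/crictl.yaml written earlier. The equivalent explicit invocation, using crictl's documented flag:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version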
	I1222 01:38:46.225459 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.257224 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.288579 2134286 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:38:46.291328 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:46.307792 2134286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:46.311726 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.324397 2134286 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:38:46.327309 2134286 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:46.327454 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:46.327539 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.376288 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.376313 2134286 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:38:46.376388 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.406533 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.406556 2134286 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:46.406564 2134286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:38:46.406653 2134286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
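	The [Unit]/[Service] fragment above is installed as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). A hedged way to view the merged unit once it is in place:
	  docker exec newest-cni-704217 systemctl cat kubelet.service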
	I1222 01:38:46.406737 2134286 ssh_runner.go:195] Run: crio config
	I1222 01:38:46.477901 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:46.477922 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:46.477963 2134286 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:46.477999 2134286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:46.478139 2134286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
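	The generated config above is written out as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). A hedged pre-flight sketch, assuming a kubeadm new enough (v1.26+) to ship the validate subcommand:
	  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new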
	
	I1222 01:38:46.478220 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:46.486049 2134286 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:46.486137 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:46.493605 2134286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:38:46.506655 2134286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:46.520439 2134286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:38:46.533218 2134286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:46.537007 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.546792 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.662666 2134286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:46.679594 2134286 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:38:46.679659 2134286 certs.go:195] generating shared ca certs ...
	I1222 01:38:46.679688 2134286 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.679859 2134286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:38:46.679932 2134286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:38:46.679968 2134286 certs.go:257] generating profile certs ...
	I1222 01:38:46.680044 2134286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:38:46.680085 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt with IP's: []
	I1222 01:38:46.841660 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt ...
	I1222 01:38:46.841690 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt: {Name:mk0a220848ae73520e360330c9832985737c86ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.841917 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key ...
	I1222 01:38:46.841933 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key: {Name:mk3fb1e07728181520997d7299a7d80c9afcd3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.842027 2134286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:38:46.842046 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:38:46.952611 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 ...
	I1222 01:38:46.952645 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856: {Name:mk23d75e59a433395bcccffb2237600ddcfba975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952842 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 ...
	I1222 01:38:46.952855 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856: {Name:mke91b318f6e1a5e82c06575fddda672305e0a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952949 2134286 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt
	I1222 01:38:46.953039 2134286 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key
	I1222 01:38:46.953108 2134286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:38:46.953127 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt with IP's: []
	I1222 01:38:47.135005 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt ...
	I1222 01:38:47.135039 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt: {Name:mk0bf95a48d358b9b6753ae55278429fc5cfa2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135258 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key ...
	I1222 01:38:47.135281 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key: {Name:mk2786f86b96c0685cea4c15b093d231aba8608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135484 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:38:47.135534 2134286 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:47.135550 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:38:47.135580 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:47.135609 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:47.135636 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:47.135686 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:47.137405 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:47.165131 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:47.186245 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:47.206617 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:47.225290 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:47.243359 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:38:47.261575 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:47.279279 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:47.296796 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:38:47.314798 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:38:47.332337 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:47.350465 2134286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
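Note: the lines above cover the whole cert-provisioning path for this profile: minikube generates the apiserver serving cert with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], then scp's the CA, apiserver, and proxy-client key pairs into /var/lib/minikube/certs on the node. A minimal sketch (paths taken from this log) for checking the SANs actually baked into that cert:

    # Print the apiserver cert's Subject Alternative Names; the IPs should match
    # the list logged by crypto.go:68 above.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt \
      | grep -A1 'Subject Alternative Name'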
	I1222 01:38:47.363436 2134286 ssh_runner.go:195] Run: openssl version
	I1222 01:38:47.370127 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.377515 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:47.384963 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388630 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388716 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.430642 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:47.438907 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:38:47.446439 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.453815 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:38:47.461441 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465092 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465189 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.506277 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:47.513878 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:38:47.521311 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.528445 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:38:47.535982 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539602 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539691 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.580405 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:47.587812 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
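The block above is the standard OpenSSL trust-store convention, repeated once per CA: copy the PEM under /usr/share/ca-certificates, compute its subject hash, and symlink it as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). Condensed into a sketch for a single CA, using the names from the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")      # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"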
	I1222 01:38:47.602092 2134286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:47.606351 2134286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:38:47.606451 2134286 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:47.606588 2134286 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:47.606681 2134286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:47.642649 2134286 cri.go:96] found id: ""
	I1222 01:38:47.642790 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:47.652615 2134286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:38:47.661497 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:38:47.661607 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:38:47.669765 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:38:47.669786 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:38:47.669862 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:38:47.678062 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:38:47.678185 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:38:47.685567 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:38:47.693826 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:38:47.693987 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:38:47.702486 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.710049 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:38:47.710137 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.717426 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:38:47.724768 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:38:47.724863 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
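The grep/rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Here every grep exits 2 because the files do not exist yet, so the rm calls are no-ops. Functionally, roughly:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # Drop any config that does not reference the expected endpoint (missing files included).
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done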
	I1222 01:38:47.731996 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:38:47.851611 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:38:47.852034 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:38:47.938478 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.168234 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000189279s
	I1222 01:41:27.168261 2113859 kubeadm.go:319] 
	I1222 01:41:27.168319 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:41:27.168358 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:41:27.168485 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:41:27.168497 2113859 kubeadm.go:319] 
	I1222 01:41:27.168608 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:41:27.168644 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:41:27.168684 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:41:27.168692 2113859 kubeadm.go:319] 
	I1222 01:41:27.172924 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:41:27.173370 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:41:27.173496 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.173763 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:41:27.173774 2113859 kubeadm.go:319] 
	I1222 01:41:27.173843 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:41:27.173900 2113859 kubeadm.go:403] duration metric: took 8m7.846004041s to StartCluster
	I1222 01:41:27.173948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:27.174010 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:27.198460 2113859 cri.go:96] found id: ""
	I1222 01:41:27.198503 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.198512 2113859 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:27.198519 2113859 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:41:27.198581 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:27.223806 2113859 cri.go:96] found id: ""
	I1222 01:41:27.223832 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.223840 2113859 logs.go:284] No container was found matching "etcd"
	I1222 01:41:27.223847 2113859 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:41:27.223904 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:27.250017 2113859 cri.go:96] found id: ""
	I1222 01:41:27.250042 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.250051 2113859 logs.go:284] No container was found matching "coredns"
	I1222 01:41:27.250068 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:27.250129 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:27.275909 2113859 cri.go:96] found id: ""
	I1222 01:41:27.275934 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.275942 2113859 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:27.275948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:27.276004 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:27.300182 2113859 cri.go:96] found id: ""
	I1222 01:41:27.300215 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.300225 2113859 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:27.300231 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:27.300293 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:27.324772 2113859 cri.go:96] found id: ""
	I1222 01:41:27.324797 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.324806 2113859 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:27.324813 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:27.324868 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:27.353970 2113859 cri.go:96] found id: ""
	I1222 01:41:27.354043 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.354058 2113859 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:27.354068 2113859 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:27.354081 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:27.422083 2113859 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:27.422119 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:27.438918 2113859 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:27.438950 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:27.507814 2113859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[same five "connection refused" errors as in the stderr block above]
	
	** /stderr **
	I1222 01:41:27.507835 2113859 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:41:27.507846 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:41:27.550141 2113859 logs.go:123] Gathering logs for container status ...
	I1222 01:41:27.550178 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:27.578268 2113859 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:41:27.578345 2113859 out.go:285] * 
	W1222 01:41:27.578396 2113859 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted above]
	
	W1222 01:41:27.578413 2113859 out.go:285] * 
	W1222 01:41:27.580529 2113859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:41:27.585764 2113859 out.go:203] 
	W1222 01:41:27.589556 2113859 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted above]
	
	W1222 01:41:27.589611 2113859 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:41:27.589635 2113859 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:41:27.592761 2113859 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665075722Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-rc.1 not found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665126248Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-apiserver:v1.35.0-rc.1 found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464435752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464781145Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464820692Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637248018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.63753477Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637589932Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903489965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903812647Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903864485Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.666376352Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=d373ec24-b32f-4bc7-b1ee-dbc429c84518 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.670731699Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=985533ce-73ee-492d-bec3-e3f7745742a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.672178252Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=4324ee88-2078-4e1d-ac8c-226946777419 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.673730164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=94416a1e-c2e8-4002-b2df-c67290c0deda name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.675651066Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=26b90d19-1583-4402-8028-995cec1d8c95 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.678449012Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4b69cbcb-1564-462b-975a-a87b64ce4d87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.680670114Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=7125d5bc-0162-450e-a7bb-265b2a72453d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.297315654Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=7a582268-ead4-4858-9bcb-880317cde6d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.301718861Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=758439ba-2257-42b2-b322-7eee997029e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.303295905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=6d1d7a92-5737-428e-8f96-ffefc25ceae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.305283578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ebe1fb9e-67e5-4ffb-b4bd-137d43ff137b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.306347021Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6bb0a1d2-344b-4b71-8ad1-df4251af2703 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.309548352Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3c577299-eacf-44a2-91b2-dea64d0df8f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.310598748Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c9f46b28-9cc1-401a-9feb-ff7f5b2f29b5 name=/runtime.v1.ImageService/ImageStatus
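Every ImageStatus probe above comes back "not found": consistent with a no-preload run, the CRI-O image store is empty and nothing was ever pulled. With the docker driver the node is a container named after the profile, so this can be confirmed from the host, e.g.:

    # List what CRI-O actually has cached (expected here: nothing).
    docker exec no-preload-491054 crictl images
    # Hypothetical manual pull, to separate registry/network problems from kubelet problems:
    docker exec no-preload-491054 crictl pull registry.k8s.io/kube-apiserver:v1.35.0-rc.1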
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:31.711677    5965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:31.712446    5965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:31.713977    5965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:31.714473    5965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:31.715935    5965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:41:31 up 1 day,  8:24,  0 user,  load average: 0.50, 1.46, 1.98
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:29 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:29 no-preload-491054 kubelet[5759]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:29 no-preload-491054 kubelet[5759]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:29 no-preload-491054 kubelet[5759]: E1222 01:41:29.924051    5759 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:29 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:30 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 22 01:41:30 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:30 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:30 no-preload-491054 kubelet[5856]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:30 no-preload-491054 kubelet[5856]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:30 no-preload-491054 kubelet[5856]: E1222 01:41:30.701667    5856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:30 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:30 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:41:31 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 22 01:41:31 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:31 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:41:31 no-preload-491054 kubelet[5890]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:31 no-preload-491054 kubelet[5890]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:41:31 no-preload-491054 kubelet[5890]: E1222 01:41:31.404433    5890 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:41:31 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:41:31 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
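The kubelet journal pins down the root cause: kubelet v1.35.0-rc.1 fails config validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a tight loop (restart counter 651-653), and so kubeadm's four-minute wait on 127.0.0.1:10248/healthz can never succeed. A hedged way to confirm the cgroup mode and, purely as a stopgap, opt back into the deprecated v1 support via the failCgroupV1 field named in the error (shown as a naive append; merge it properly if the key already exists, and note the real fix is a cgroup v2 host):

    # "tmpfs" => cgroup v1, "cgroup2fs" => cgroup v2
    docker exec no-preload-491054 stat -fc %T /sys/fs/cgroup
    # Stopgap sketch: allow kubelet to run on cgroup v1, then restart it.
    docker exec no-preload-491054 sh -c \
      'echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml && systemctl restart kubelet'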
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 6 (372.048923ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:41:32.191787 2139589 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
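The exit-6 status is a kubeconfig problem layered on top of the dead apiserver: the profile's endpoint was never written to the Jenkins kubeconfig, which is also what the "stale minikube-vm" warning in the stdout above refers to. The report's own suggestion, spelled out against this profile:

    # Repoint the kubectl context at the profile, per the hint above, then re-check.
    out/minikube-linux-arm64 -p no-preload-491054 update-context
    out/minikube-linux-arm64 -p no-preload-491054 status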
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.98s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (116.48s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1222 01:42:01.693553 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:01.698882 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:01.709207 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:01.729565 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:01.769898 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:01.850310 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:02.010866 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:02.331527 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:02.972497 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:03.285172 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:04.253445 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:06.813720 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:11.934338 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:22.175189 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:30.967222 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:42.655412 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:42:43.836093 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:43:23.616124 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m54.996852974s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
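Every "connection refused" above points at the same underlying condition: nothing is listening on the apiserver port because kubelet never started. A hedged way to verify that by hand from inside the node (commands assume the cri-o runtime this job uses; they are not part of the test output):

	# List kube-apiserver containers, running or exited, via crictl against cri-o.
	minikube ssh -p no-preload-491054 -- sudo crictl ps -a --name kube-apiserver
	# Probe the endpoint that kubectl apply was validating against.
	minikube ssh -p no-preload-491054 -- curl -ksS https://localhost:8443/healthz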
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-491054 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-491054 describe deploy/metrics-server -n kube-system: exit status 1 (55.721755ms)

** stderr ** 
	error: context "no-preload-491054" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-491054 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
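For reference, the assertion above greps the deployment description for the overridden image. The same check expressed directly with jsonpath (a sketch; it can only succeed once the apiserver is reachable):

	kubectl --context no-preload-491054 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected output: fake.domain/registry.k8s.io/echoserver:1.4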
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2114161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:32:55.338008768Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0028d366ff0985d5277d2d2f90b7ddf8e366e82e83e7670b234adfbab8a808ab",
	            "SandboxKey": "/var/run/docker/netns/0028d366ff09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38752"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38755"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38753"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38754"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:04:e9:7c:da:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "31fb2fe9d31e4ac100d348896a6811cbe2d47df53862df096768f216d2449c34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
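The inspect dump is mostly noise for this failure; the useful part is the port map, which shows 8443 published on 127.0.0.1:38754. The harness pulls single fields out of this JSON with Go templates (the port-22 variant appears verbatim in the "Last Start" log below); the same pattern aimed at the apiserver port would be:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-491054
	# -> 38754 for this container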
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 6 (291.950642ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:43:27.559481 2141646 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-535197 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ pause   │ -p old-k8s-version-535197 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │                     │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ delete  │ -p old-k8s-version-535197                                                                                                                                                                                                                          │ old-k8s-version-535197       │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:33 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:34.723707 2134286 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:34.723823 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.723833 2134286 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:34.723839 2134286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:34.724123 2134286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:38:34.724545 2134286 out.go:368] Setting JSON to false
	I1222 01:38:34.725446 2134286 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116464,"bootTime":1766251051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:38:34.725511 2134286 start.go:143] virtualization:  
	I1222 01:38:34.729663 2134286 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:34.733886 2134286 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:34.734036 2134286 notify.go:221] Checking for updates...
	I1222 01:38:34.740441 2134286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:34.743585 2134286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:38:34.746751 2134286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:38:34.749869 2134286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:34.752958 2134286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:34.756493 2134286 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:34.756622 2134286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:34.783091 2134286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:34.783353 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.839782 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.830338261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.839888 2134286 docker.go:319] overlay module found
	I1222 01:38:34.845569 2134286 out.go:179] * Using the docker driver based on user configuration
	I1222 01:38:34.848592 2134286 start.go:309] selected driver: docker
	I1222 01:38:34.848611 2134286 start.go:928] validating driver "docker" against <nil>
	I1222 01:38:34.848634 2134286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:34.849297 2134286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:34.935540 2134286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:34.926549754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:34.935703 2134286 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:38:34.935733 2134286 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:38:34.935959 2134286 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:34.938976 2134286 out.go:179] * Using Docker driver with root privileges
	I1222 01:38:34.941882 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:34.941953 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:34.941972 2134286 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:38:34.942050 2134286 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:34.945127 2134286 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:38:34.947951 2134286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:38:34.950705 2134286 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:34.953516 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:34.953561 2134286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:38:34.953574 2134286 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:34.953598 2134286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:34.953659 2134286 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:34.953669 2134286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:38:34.953790 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:34.953808 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json: {Name:mkbd82291dca772e741ca6a4965901d8562f02f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:34.972608 2134286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:34.972632 2134286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:34.972653 2134286 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:34.972687 2134286 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:34.972789 2134286 start.go:364] duration metric: took 81.072µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:38:34.972826 2134286 start.go:93] Provisioning new machine with config: &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:38:34.972897 2134286 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:38:34.976294 2134286 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:38:34.976517 2134286 start.go:159] libmachine.API.Create for "newest-cni-704217" (driver="docker")
	I1222 01:38:34.976560 2134286 client.go:173] LocalClient.Create starting
	I1222 01:38:34.976631 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:38:34.976666 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976687 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.976752 2134286 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:38:34.976774 2134286 main.go:144] libmachine: Decoding PEM data...
	I1222 01:38:34.976789 2134286 main.go:144] libmachine: Parsing certificate...
	I1222 01:38:34.977153 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:38:34.991933 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:38:34.992025 2134286 network_create.go:284] running [docker network inspect newest-cni-704217] to gather additional debugging logs...
	I1222 01:38:34.992051 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217
	W1222 01:38:35.009843 2134286 cli_runner.go:211] docker network inspect newest-cni-704217 returned with exit code 1
	I1222 01:38:35.009879 2134286 network_create.go:287] error running [docker network inspect newest-cni-704217]: docker network inspect newest-cni-704217: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-704217 not found
	I1222 01:38:35.009894 2134286 network_create.go:289] output of [docker network inspect newest-cni-704217]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-704217 not found
	
	** /stderr **
	I1222 01:38:35.009998 2134286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:35.028311 2134286 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:38:35.028684 2134286 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:38:35.029053 2134286 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:38:35.029412 2134286 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:38:35.029864 2134286 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5ab0}
	I1222 01:38:35.029890 2134286 network_create.go:124] attempt to create docker network newest-cni-704217 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:38:35.029949 2134286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-704217 newest-cni-704217
	I1222 01:38:35.091875 2134286 network_create.go:108] docker network newest-cni-704217 192.168.85.0/24 created
	I1222 01:38:35.091908 2134286 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-704217" container
	I1222 01:38:35.091998 2134286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:38:35.109188 2134286 cli_runner.go:164] Run: docker volume create newest-cni-704217 --label name.minikube.sigs.k8s.io=newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:38:35.127123 2134286 oci.go:103] Successfully created a docker volume newest-cni-704217
	I1222 01:38:35.127220 2134286 cli_runner.go:164] Run: docker run --rm --name newest-cni-704217-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --entrypoint /usr/bin/test -v newest-cni-704217:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:38:35.650646 2134286 oci.go:107] Successfully prepared a docker volume newest-cni-704217
	I1222 01:38:35.650718 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:35.650728 2134286 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:38:35.650802 2134286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:38:39.466788 2134286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-704217:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.81593333s)
	I1222 01:38:39.466823 2134286 kic.go:203] duration metric: took 3.816091086s to extract preloaded images to volume ...
	W1222 01:38:39.466970 2134286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:38:39.467078 2134286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:38:39.518928 2134286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-704217 --name newest-cni-704217 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-704217 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-704217 --network newest-cni-704217 --ip 192.168.85.2 --volume newest-cni-704217:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:38:39.824489 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Running}}
	I1222 01:38:39.845273 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:39.868132 2134286 cli_runner.go:164] Run: docker exec newest-cni-704217 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:38:39.918857 2134286 oci.go:144] the created container "newest-cni-704217" has a running status.
	I1222 01:38:39.918885 2134286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa...
	I1222 01:38:40.057379 2134286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:38:40.081663 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.113212 2134286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:38:40.113247 2134286 kic_runner.go:114] Args: [docker exec --privileged newest-cni-704217 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:38:40.193617 2134286 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:38:40.220327 2134286 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:40.220433 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:40.250962 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:40.251306 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:40.251316 2134286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:40.251882 2134286 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51396->127.0.0.1:38776: read: connection reset by peer
	I1222 01:38:43.385195 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
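The "connection reset by peer" at 01:38:40 is the expected race: sshd inside the freshly created container is still starting, and the client simply retries until the forwarded port accepts a connection (here it succeeds about three seconds later). A sketch of such a wait loop, assuming nothing beyond the standard library:

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded port until sshd accepts a TCP
    // connection or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is up; an SSH handshake can follow
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not reachable within %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:38776", time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sshd is accepting connections")
    }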
	I1222 01:38:43.385223 2134286 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:38:43.385289 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.406922 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.407255 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.407271 2134286 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:38:43.550918 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:38:43.550996 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.569002 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:43.569357 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:43.569382 2134286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:43.701601 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:43.701676 2134286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:38:43.701715 2134286 ubuntu.go:190] setting up certificates
	I1222 01:38:43.701726 2134286 provision.go:84] configureAuth start
	I1222 01:38:43.701795 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:43.718903 2134286 provision.go:143] copyHostCerts
	I1222 01:38:43.718967 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:38:43.718976 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:38:43.719054 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:38:43.719156 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:38:43.719162 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:38:43.719188 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:38:43.719246 2134286 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:38:43.719251 2134286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:38:43.719274 2134286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:38:43.719328 2134286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
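The server cert generated here must carry every name and IP the machine will be reached by, which is why the san list mixes loopback, the container IP, and the profile hostname. A generic crypto/x509 sketch of issuing such a cert (illustrative only; the names, IPs, and 26280h lifetime are taken from the log, but this is not minikube's provision code):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "log"
        "math/big"
        "net"
        "time"
    )

    // makeServerCert issues a CA-signed server certificate whose SANs
    // match the san=[...] list in the log above.
    func makeServerCert(ca *x509.Certificate, caKey *ecdsa.PrivateKey) ([]byte, error) {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-704217"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-704217"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
        // Throwaway self-signed CA, standing in for minikubeCA.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        ca, _ := x509.ParseCertificate(caDER)
        der, err := makeServerCert(ca, caKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("server cert: %d bytes of DER\n", len(der))
    }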
	I1222 01:38:43.977508 2134286 provision.go:177] copyRemoteCerts
	I1222 01:38:43.977583 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:43.977627 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:43.995727 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.093656 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:44.115417 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:44.140528 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:38:44.162225 2134286 provision.go:87] duration metric: took 460.475764ms to configureAuth
	I1222 01:38:44.162253 2134286 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:44.162450 2134286 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:44.162564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.180792 2134286 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:44.181103 2134286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38776 <nil> <nil>}
	I1222 01:38:44.181121 2134286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:38:44.469915 2134286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:38:44.469949 2134286 machine.go:97] duration metric: took 4.249594356s to provisionDockerMachine
	I1222 01:38:44.469959 2134286 client.go:176] duration metric: took 9.493388716s to LocalClient.Create
	I1222 01:38:44.469974 2134286 start.go:167] duration metric: took 9.493457957s to libmachine.API.Create "newest-cni-704217"
	I1222 01:38:44.469987 2134286 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:38:44.469998 2134286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:44.470072 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:44.470110 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.487566 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.585312 2134286 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:44.588561 2134286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:44.588588 2134286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:44.588600 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:38:44.588655 2134286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:38:44.588735 2134286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:38:44.588834 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:44.596094 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:44.613273 2134286 start.go:296] duration metric: took 143.27184ms for postStartSetup
	I1222 01:38:44.613679 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.630261 2134286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:38:44.630541 2134286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:44.630589 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.647052 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.738630 2134286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:44.743420 2134286 start.go:128] duration metric: took 9.770509222s to createHost
	I1222 01:38:44.743497 2134286 start.go:83] releasing machines lock for "newest-cni-704217", held for 9.770692552s
	I1222 01:38:44.743588 2134286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:38:44.760151 2134286 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:44.760209 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.760507 2134286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:44.760564 2134286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:38:44.782041 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.788656 2134286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38776 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:38:44.973622 2134286 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:44.980619 2134286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:38:45.057102 2134286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:45.064439 2134286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:45.064557 2134286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:45.103917 2134286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
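Pre-existing bridge and podman CNI configs are disabled rather than deleted: they are renamed with a .mk_disabled suffix so the runtime stops loading them while they remain recoverable. Roughly, in Go (an assumed equivalent of the find/mv pipeline above):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs in dir so the
    // runtime no longer loads them, keeping the originals recoverable.
    func disableBridgeCNI(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("disabled:", disabled)
    }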
	I1222 01:38:45.103956 2134286 start.go:496] detecting cgroup driver to use...
	I1222 01:38:45.103997 2134286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:45.104069 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:38:45.126644 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:38:45.158107 2134286 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:45.158598 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:45.189553 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:45.237377 2134286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:45.372783 2134286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:45.495736 2134286 docker.go:234] disabling docker service ...
	I1222 01:38:45.495825 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:45.520536 2134286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:45.534342 2134286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:45.667915 2134286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:45.795197 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:45.808465 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:45.822173 2134286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:38:45.822242 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.830599 2134286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:38:45.830673 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.839239 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.847549 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.856111 2134286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:45.863959 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.872612 2134286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.886086 2134286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:38:45.895578 2134286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:45.903269 2134286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:45.910730 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.026286 2134286 ssh_runner.go:195] Run: sudo systemctl restart crio
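The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs to match the kubelet, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls, before restarting cri-o. A compressed sketch of the first two rewrites as regex substitutions (an illustration of the same edits, not minikube's code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // patchCrioConf applies the first two sed edits above as regex
    // substitutions over the whole config file.
    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        subs := []struct{ pattern, repl string }{
            {`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
        }
        for _, s := range subs {
            data = regexp.MustCompile(s.pattern).ReplaceAll(data, []byte(s.repl))
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            log.Fatal(err)
        }
        // A "sudo systemctl restart crio" must follow, as in the log.
    }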
	I1222 01:38:46.193052 2134286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:38:46.193121 2134286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:38:46.197063 2134286 start.go:564] Will wait 60s for crictl version
	I1222 01:38:46.197156 2134286 ssh_runner.go:195] Run: which crictl
	I1222 01:38:46.200956 2134286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:46.225282 2134286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:38:46.225459 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.257224 2134286 ssh_runner.go:195] Run: crio --version
	I1222 01:38:46.288579 2134286 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:38:46.291328 2134286 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:46.307792 2134286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:46.311726 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
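This /etc/hosts update is idempotent: any stale host.minikube.internal line is filtered out before the fresh mapping is appended, and the temp file is copied back with sudo. A hedged Go equivalent (it needs the same root privileges to write /etc/hosts):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // upsertHost removes any existing line ending in "\t<name>" and
    // appends a fresh "ip\tname" mapping, like the bash one-liner above.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Needs root, exactly like the sudo cp in the log.
        if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }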
	I1222 01:38:46.324397 2134286 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:38:46.327309 2134286 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:46.327454 2134286 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:38:46.327539 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.376288 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.376313 2134286 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:38:46.376388 2134286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:46.406533 2134286 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:38:46.406556 2134286 cache_images.go:86] Images are preloaded, skipping loading
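Both crictl calls above return the image list as JSON, which is compared against the preload manifest so image loading can be skipped when everything is already present. A rough sketch of that check (the struct models only the repoTags field and assumes crictl's JSON shape):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // criImages models just enough of "crictl images --output json"
    // (the field names are an assumption about that JSON shape).
    type criImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // preloaded reports whether every wanted image reference is already
    // known to the runtime.
    func preloaded(want []string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs criImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, im := range imgs.Images {
            for _, tag := range im.RepoTags {
                have[tag] = true
            }
        }
        for _, w := range want {
            if !have[w] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := preloaded([]string{"registry.k8s.io/pause:3.10.1"})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("all images preloaded:", ok)
    }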
	I1222 01:38:46.406564 2134286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:38:46.406653 2134286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:46.406737 2134286 ssh_runner.go:195] Run: crio config
	I1222 01:38:46.477901 2134286 cni.go:84] Creating CNI manager for ""
	I1222 01:38:46.477922 2134286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:38:46.477963 2134286 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:46.477999 2134286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:46.478139 2134286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
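The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered per profile, with the node name, IP, and CIDRs substituted in before being written to /var/tmp/minikube/kubeadm.yaml.new. An illustrative text/template rendering of the InitConfiguration fragment (a sketch only, not minikube's actual template):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // A cut-down template for the InitConfiguration fragment above; the
    // real generator substitutes these values per profile.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        err := t.Execute(os.Stdout, map[string]any{
            "NodeIP":   "192.168.85.2",
            "Port":     8443,
            "NodeName": "newest-cni-704217",
        })
        if err != nil {
            log.Fatal(err)
        }
    }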
	I1222 01:38:46.478220 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:46.486049 2134286 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:46.486137 2134286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:46.493605 2134286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:38:46.506655 2134286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:46.520439 2134286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:38:46.533218 2134286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:46.537007 2134286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:46.546792 2134286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:46.662666 2134286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:46.679594 2134286 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:38:46.679659 2134286 certs.go:195] generating shared ca certs ...
	I1222 01:38:46.679688 2134286 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.679859 2134286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:38:46.679932 2134286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:38:46.679968 2134286 certs.go:257] generating profile certs ...
	I1222 01:38:46.680044 2134286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:38:46.680085 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt with IP's: []
	I1222 01:38:46.841660 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt ...
	I1222 01:38:46.841690 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.crt: {Name:mk0a220848ae73520e360330c9832985737c86ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.841917 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key ...
	I1222 01:38:46.841933 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key: {Name:mk3fb1e07728181520997d7299a7d80c9afcd3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.842027 2134286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:38:46.842046 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:38:46.952611 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 ...
	I1222 01:38:46.952645 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856: {Name:mk23d75e59a433395bcccffb2237600ddcfba975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952842 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 ...
	I1222 01:38:46.952855 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856: {Name:mke91b318f6e1a5e82c06575fddda672305e0a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:46.952949 2134286 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt
	I1222 01:38:46.953039 2134286 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key
	I1222 01:38:46.953108 2134286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:38:46.953127 2134286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt with IP's: []
	I1222 01:38:47.135005 2134286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt ...
	I1222 01:38:47.135039 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt: {Name:mk0bf95a48d358b9b6753ae55278429fc5cfa2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135258 2134286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key ...
	I1222 01:38:47.135281 2134286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key: {Name:mk2786f86b96c0685cea4c15b093d231aba8608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:47.135484 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:38:47.135534 2134286 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:47.135550 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:38:47.135580 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:47.135609 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:47.135636 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:47.135686 2134286 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:38:47.137405 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:47.165131 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:47.186245 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:47.206617 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:47.225290 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:47.243359 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:38:47.261575 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:47.279279 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:47.296796 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:38:47.314798 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:38:47.332337 2134286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:47.350465 2134286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:47.363436 2134286 ssh_runner.go:195] Run: openssl version
	I1222 01:38:47.370127 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.377515 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:47.384963 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388630 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.388716 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:47.430642 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:47.438907 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:38:47.446439 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.453815 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:38:47.461441 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465092 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.465189 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:38:47.506277 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:47.513878 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:38:47.521311 2134286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.528445 2134286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:38:47.535982 2134286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539602 2134286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.539691 2134286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:38:47.580405 2134286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:47.587812 2134286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
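The test -L / ln -fs pairs above implement OpenSSL's hashed trust-store layout: each CA PEM in /etc/ssl/certs needs a symlink named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so openssl can look it up by hash. A sketch of computing the hash and creating the link, shelling out to openssl (an illustration of the same steps, not minikube's exact code path):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCert creates the <subject-hash>.0 symlink OpenSSL expects in
    // its hashed trust directory, asking openssl for the hash first.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/etc/ssl/certs/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }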
	I1222 01:38:47.602092 2134286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:47.606351 2134286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:38:47.606451 2134286 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:47.606588 2134286 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:47.606681 2134286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:47.642649 2134286 cri.go:96] found id: ""
	I1222 01:38:47.642790 2134286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:47.652615 2134286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:38:47.661497 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:38:47.661607 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:38:47.669765 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:38:47.669786 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:38:47.669862 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:38:47.678062 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:38:47.678185 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:38:47.685567 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:38:47.693826 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:38:47.693987 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:38:47.702486 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.710049 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:38:47.710137 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:38:47.717426 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:38:47.724768 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:38:47.724863 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:38:47.731996 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:38:47.851611 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:38:47.852034 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:38:47.938478 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.168234 2113859 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000189279s
	I1222 01:41:27.168261 2113859 kubeadm.go:319] 
	I1222 01:41:27.168319 2113859 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:41:27.168358 2113859 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:41:27.168485 2113859 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:41:27.168497 2113859 kubeadm.go:319] 
	I1222 01:41:27.168608 2113859 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:41:27.168644 2113859 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:41:27.168684 2113859 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:41:27.168692 2113859 kubeadm.go:319] 
	I1222 01:41:27.172924 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:41:27.173370 2113859 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:41:27.173496 2113859 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:41:27.173763 2113859 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:41:27.173774 2113859 kubeadm.go:319] 
	I1222 01:41:27.173843 2113859 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:41:27.173900 2113859 kubeadm.go:403] duration metric: took 8m7.846004041s to StartCluster
	I1222 01:41:27.173948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:27.174010 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:27.198460 2113859 cri.go:96] found id: ""
	I1222 01:41:27.198503 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.198512 2113859 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:27.198519 2113859 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:41:27.198581 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:27.223806 2113859 cri.go:96] found id: ""
	I1222 01:41:27.223832 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.223840 2113859 logs.go:284] No container was found matching "etcd"
	I1222 01:41:27.223847 2113859 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:41:27.223904 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:27.250017 2113859 cri.go:96] found id: ""
	I1222 01:41:27.250042 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.250051 2113859 logs.go:284] No container was found matching "coredns"
	I1222 01:41:27.250068 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:27.250129 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:27.275909 2113859 cri.go:96] found id: ""
	I1222 01:41:27.275934 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.275942 2113859 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:27.275948 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:27.276004 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:27.300182 2113859 cri.go:96] found id: ""
	I1222 01:41:27.300215 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.300225 2113859 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:27.300231 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:27.300293 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:27.324772 2113859 cri.go:96] found id: ""
	I1222 01:41:27.324797 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.324806 2113859 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:27.324813 2113859 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:27.324868 2113859 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:27.353970 2113859 cri.go:96] found id: ""
	I1222 01:41:27.354043 2113859 logs.go:282] 0 containers: []
	W1222 01:41:27.354058 2113859 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:27.354068 2113859 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:27.354081 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:27.422083 2113859 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:27.422119 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:27.438918 2113859 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:27.438950 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:27.507814 2113859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:27.497051    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.497631    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499152    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.499599    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:27.504110    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:27.507835 2113859 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:41:27.507846 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:41:27.550141 2113859 logs.go:123] Gathering logs for container status ...
	I1222 01:41:27.550178 2113859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:27.578268 2113859 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:41:27.578345 2113859 out.go:285] * 
	W1222 01:41:27.578396 2113859 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:41:27.578413 2113859 out.go:285] * 
	W1222 01:41:27.580529 2113859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:41:27.585764 2113859 out.go:203] 
	W1222 01:41:27.589556 2113859 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000189279s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:41:27.589611 2113859 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:41:27.589635 2113859 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:41:27.592761 2113859 out.go:203] 
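The suggestion above points at the kubelet's cgroup driver. A minimal sketch of the retry it describes, with <profile> as a placeholder for the failing profile and the driver, runtime, and version flags reused from the invocations elsewhere in this report:

	# Hedged sketch: retry the start with the kubelet pinned to the systemd
	# cgroup driver, per the suggestion above. <profile> is a placeholder,
	# not a profile name taken from this log.
	out/minikube-linux-arm64 start -p <profile> --driver=docker \
	  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd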
	I1222 01:42:51.689259 2134286 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:42:51.689305 2134286 kubeadm.go:319] 
	I1222 01:42:51.689457 2134286 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:42:51.693607 2134286 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:42:51.693680 2134286 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:42:51.693786 2134286 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:42:51.693854 2134286 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:42:51.693896 2134286 kubeadm.go:319] OS: Linux
	I1222 01:42:51.693951 2134286 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:42:51.694009 2134286 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:42:51.694066 2134286 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:42:51.694122 2134286 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:42:51.694174 2134286 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:42:51.694237 2134286 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:42:51.694288 2134286 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:42:51.694342 2134286 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:42:51.694392 2134286 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:42:51.694474 2134286 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:42:51.694572 2134286 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:42:51.694670 2134286 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:42:51.694735 2134286 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:42:51.697624 2134286 out.go:252]   - Generating certificates and keys ...
	I1222 01:42:51.697718 2134286 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:42:51.697808 2134286 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:42:51.697915 2134286 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:42:51.697987 2134286 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:42:51.698058 2134286 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:42:51.698120 2134286 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:42:51.698181 2134286 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:42:51.698337 2134286 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:42:51.698415 2134286 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:42:51.698545 2134286 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:42:51.698615 2134286 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:42:51.698683 2134286 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:42:51.698736 2134286 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:42:51.698795 2134286 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:42:51.698852 2134286 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:42:51.698914 2134286 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:42:51.698974 2134286 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:42:51.699044 2134286 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:42:51.699102 2134286 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:42:51.699191 2134286 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:42:51.699266 2134286 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:42:51.702198 2134286 out.go:252]   - Booting up control plane ...
	I1222 01:42:51.702305 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:42:51.702393 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:42:51.702465 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:42:51.702605 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:42:51.702722 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:42:51.702859 2134286 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:42:51.702962 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:42:51.703031 2134286 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:42:51.703176 2134286 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:42:51.703289 2134286 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:42:51.703359 2134286 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001091621s
	I1222 01:42:51.703367 2134286 kubeadm.go:319] 
	I1222 01:42:51.703424 2134286 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:42:51.703471 2134286 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:42:51.703600 2134286 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:42:51.703609 2134286 kubeadm.go:319] 
	I1222 01:42:51.703741 2134286 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:42:51.703788 2134286 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:42:51.703827 2134286 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:42:51.703884 2134286 kubeadm.go:319] 
	W1222 01:42:51.703963 2134286 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-704217] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001091621s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:42:51.704042 2134286 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1222 01:42:52.116333 2134286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:42:52.130279 2134286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:42:52.130395 2134286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:42:52.138956 2134286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:42:52.138973 2134286 kubeadm.go:158] found existing configuration files:
	
	I1222 01:42:52.139033 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:42:52.147315 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:42:52.147440 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:42:52.155563 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:42:52.163564 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:42:52.163636 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:42:52.170913 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:42:52.178471 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:42:52.178533 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:42:52.185865 2134286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:42:52.193518 2134286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:42:52.193583 2134286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
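The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init is retried. The same check written as a standalone loop (endpoint and file names copied from the log; the loop form is an illustration, not minikube's actual code):

	# Drop kubeconfigs that do not point at the expected endpoint
	# (endpoint and paths taken verbatim from the log above).
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done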
	I1222 01:42:52.201481 2134286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:42:52.237447 2134286 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:42:52.237777 2134286 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:42:52.304606 2134286 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:42:52.304761 2134286 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:42:52.304822 2134286 kubeadm.go:319] OS: Linux
	I1222 01:42:52.304898 2134286 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:42:52.305020 2134286 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:42:52.305102 2134286 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:42:52.305188 2134286 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:42:52.305283 2134286 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:42:52.305392 2134286 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:42:52.305467 2134286 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:42:52.305543 2134286 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:42:52.305618 2134286 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:42:52.385627 2134286 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:42:52.385746 2134286 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:42:52.385870 2134286 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:42:52.397770 2134286 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:42:52.403059 2134286 out.go:252]   - Generating certificates and keys ...
	I1222 01:42:52.403222 2134286 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:42:52.403321 2134286 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:42:52.403441 2134286 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:42:52.403535 2134286 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:42:52.403644 2134286 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:42:52.403731 2134286 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:42:52.403829 2134286 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:42:52.403931 2134286 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:42:52.404039 2134286 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:42:52.404145 2134286 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:42:52.404221 2134286 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:42:52.404305 2134286 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:42:52.526140 2134286 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:42:52.663215 2134286 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:42:53.123947 2134286 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:42:53.256891 2134286 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:42:53.565548 2134286 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:42:53.566081 2134286 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:42:53.568608 2134286 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:42:53.571917 2134286 out.go:252]   - Booting up control plane ...
	I1222 01:42:53.572079 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:42:53.572209 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:42:53.572309 2134286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:42:53.586354 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:42:53.586471 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:42:53.594381 2134286 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:42:53.595983 2134286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:42:53.596247 2134286 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:42:53.724884 2134286 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:42:53.725013 2134286 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665075722Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-rc.1 not found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:04 no-preload-491054 crio[840]: time="2025-12-22T01:33:04.665126248Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-apiserver:v1.35.0-rc.1 found" id=dd506993-a528-4423-8950-b0587f89a1dd name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464435752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464781145Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:05 no-preload-491054 crio[840]: time="2025-12-22T01:33:05.464820692Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=8db94b91-c287-444c-a96d-81d2de63186f name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637248018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.63753477Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:07 no-preload-491054 crio[840]: time="2025-12-22T01:33:07.637589932Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=44881718-b72c-4c22-bcc5-5213e9765b06 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903489965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903812647Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:08 no-preload-491054 crio[840]: time="2025-12-22T01:33:08.903864485Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=12099b62-7092-486b-b4ad-781e051fcef2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.666376352Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=d373ec24-b32f-4bc7-b1ee-dbc429c84518 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.670731699Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=985533ce-73ee-492d-bec3-e3f7745742a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.672178252Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=4324ee88-2078-4e1d-ac8c-226946777419 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.673730164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=94416a1e-c2e8-4002-b2df-c67290c0deda name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.675651066Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=26b90d19-1583-4402-8028-995cec1d8c95 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.678449012Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4b69cbcb-1564-462b-975a-a87b64ce4d87 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:33:19 no-preload-491054 crio[840]: time="2025-12-22T01:33:19.680670114Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=7125d5bc-0162-450e-a7bb-265b2a72453d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.297315654Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=7a582268-ead4-4858-9bcb-880317cde6d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.301718861Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=758439ba-2257-42b2-b322-7eee997029e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.303295905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=6d1d7a92-5737-428e-8f96-ffefc25ceae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.305283578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ebe1fb9e-67e5-4ffb-b4bd-137d43ff137b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.306347021Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6bb0a1d2-344b-4b71-8ad1-df4251af2703 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.309548352Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3c577299-eacf-44a2-91b2-dea64d0df8f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:37:25 no-preload-491054 crio[840]: time="2025-12-22T01:37:25.310598748Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c9f46b28-9cc1-401a-9feb-ff7f5b2f29b5 name=/runtime.v1.ImageService/ImageStatus
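	This is a no-preload profile, so CRI-O keeps reporting the control-plane images as not yet present while kubeadm pulls them. The kubeadm output earlier in this report names the manual alternative; a sketch of it for this version (the --kubernetes-version flag usage is an assumption, only the bare command is quoted in the log):

	# Pre-pull the control-plane images kubeadm needs, per the preflight
	# note above; the flag pins the same release this report tests.
	sudo kubeadm config images pull --kubernetes-version v1.35.0-rc.1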
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:28.208880    7098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.209868    7098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.210782    7098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.212406    7098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.212969    7098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
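	Every kubectl call in this report fails the same way: nothing is listening on 8443 because the kubelet never brought up the static control-plane pods. A sketch of the two in-node probes that confirm this, both of which the log gathering above already runs in some form (run them via minikube ssh on the affected profile):

	# Inside the node: no control-plane containers exist, and the apiserver
	# port refuses connections (mirrors the kubectl errors above).
	sudo crictl ps -a
	curl -k "https://localhost:8443/api?timeout=32s"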
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:43:28 up 1 day,  8:25,  0 user,  load average: 0.41, 1.19, 1.83
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:43:25 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:43:26 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 806.
	Dec 22 01:43:26 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:43:26 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:43:26 no-preload-491054 kubelet[6981]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:43:26 no-preload-491054 kubelet[6981]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:43:26 no-preload-491054 kubelet[6981]: E1222 01:43:26.157229    6981 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:43:26 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:43:26 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:43:26 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 807.
	Dec 22 01:43:26 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:43:26 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:43:26 no-preload-491054 kubelet[6987]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:43:26 no-preload-491054 kubelet[6987]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:43:26 no-preload-491054 kubelet[6987]: E1222 01:43:26.897932    6987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:43:26 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:43:26 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:43:27 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 22 01:43:27 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:43:27 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:43:27 no-preload-491054 kubelet[7013]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:43:27 no-preload-491054 kubelet[7013]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:43:27 no-preload-491054 kubelet[7013]: E1222 01:43:27.709224    7013 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:43:27 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:43:27 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
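	The restart loop above is the root failure for these tests: kubelet v1.35 refuses to validate its configuration on a cgroup v1 host unless that is explicitly allowed. A minimal sketch of the diagnosis and of the opt-out named in the kubeadm warnings earlier in this report (the lowercase field spelling failCgroupV1 is an assumption based on those warnings):

	# 1) Identify the host cgroup version: 'cgroup2fs' means v2, 'tmpfs' means v1.
	stat -fc %T /sys/fs/cgroup
	# 2) On a v1 host, the kubeadm warning says kubelet v1.35+ needs the
	#    KubeletConfiguration option FailCgroupV1 set to false, i.e. a line
	#      failCgroupV1: false
	#    in /var/lib/kubelet/config.yaml (path taken from the kubeadm output above).
	grep -n "failCgroupV1" /var/lib/kubelet/config.yaml || echo "failCgroupV1 not set"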
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 6 (354.197091ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:43:28.677625 2141864 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (116.48s)

TestStartStop/group/no-preload/serial/SecondStart (370.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1222 01:44:45.536480 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:45:46.889465 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:45:47.971275 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:46:25.758869 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m8.414668987s)

-- stdout --
	* [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1222 01:43:30.250445 2142162 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:43:30.250555 2142162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:43:30.250568 2142162 out.go:374] Setting ErrFile to fd 2...
	I1222 01:43:30.250578 2142162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:43:30.250816 2142162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:43:30.251169 2142162 out.go:368] Setting JSON to false
	I1222 01:43:30.252022 2142162 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116760,"bootTime":1766251051,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:43:30.252085 2142162 start.go:143] virtualization:  
	I1222 01:43:30.256993 2142162 out.go:179] * [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:43:30.260139 2142162 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:43:30.260239 2142162 notify.go:221] Checking for updates...
	I1222 01:43:30.265992 2142162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:43:30.268823 2142162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:30.271754 2142162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:43:30.274654 2142162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:43:30.277521 2142162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:43:30.280920 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:30.281572 2142162 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:43:30.304144 2142162 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:43:30.304273 2142162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:43:30.380279 2142162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:43:30.370426413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:43:30.380380 2142162 docker.go:319] overlay module found
	I1222 01:43:30.383492 2142162 out.go:179] * Using the docker driver based on existing profile
	I1222 01:43:30.386252 2142162 start.go:309] selected driver: docker
	I1222 01:43:30.386270 2142162 start.go:928] validating driver "docker" against &{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:30.386379 2142162 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:43:30.387093 2142162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:43:30.443809 2142162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:43:30.434314894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:43:30.444139 2142162 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:43:30.444167 2142162 cni.go:84] Creating CNI manager for ""
	I1222 01:43:30.444230 2142162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:43:30.444275 2142162 start.go:353] cluster config:
	{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:30.449118 2142162 out.go:179] * Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	I1222 01:43:30.451889 2142162 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:43:30.454738 2142162 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:43:30.457645 2142162 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:43:30.457719 2142162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:43:30.457790 2142162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:43:30.458097 2142162 cache.go:107] acquiring lock: {Name:mkce8c58631c5f0d23bf68d20d333345f25cc3e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458179 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:43:30.458197 2142162 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.314µs
	I1222 01:43:30.458210 2142162 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:43:30.458227 2142162 cache.go:107] acquiring lock: {Name:mk4201be15d3fc5460d5a3841ee13ef2c95ea347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458267 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:43:30.458277 2142162 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 52.495µs
	I1222 01:43:30.458284 2142162 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458304 2142162 cache.go:107] acquiring lock: {Name:mkf2e3bf6867cb15e485fef5b9e233e7559c99a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458336 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:43:30.458345 2142162 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 43.535µs
	I1222 01:43:30.458353 2142162 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458369 2142162 cache.go:107] acquiring lock: {Name:mkb8446b3e0ab7883b46b478905fd798f55f01e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458400 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:43:30.458409 2142162 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 41.172µs
	I1222 01:43:30.458449 2142162 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458470 2142162 cache.go:107] acquiring lock: {Name:mk77482801a8c57f7731abc5d9887216f12ba8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458530 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:43:30.458546 2142162 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 77.315µs
	I1222 01:43:30.458553 2142162 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458608 2142162 cache.go:107] acquiring lock: {Name:mk692f79fde95c3755a4408e63ab002578541c74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458687 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:43:30.458702 2142162 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 113.81µs
	I1222 01:43:30.458726 2142162 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:43:30.458740 2142162 cache.go:107] acquiring lock: {Name:mkd233ea519aa3a2daed97ba862dcfb34d3be39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458851 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:43:30.458865 2142162 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 126.815µs
	I1222 01:43:30.458872 2142162 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:43:30.458914 2142162 cache.go:107] acquiring lock: {Name:mk309b91ac5e9b8dc5721727680b39f391c5db3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458983 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:43:30.458997 2142162 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 92.724µs
	I1222 01:43:30.459005 2142162 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:43:30.459012 2142162 cache.go:87] Successfully saved all images to host disk.
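
The cache.go lines above repeat one pattern per image: acquire a per-image lock, then skip the save when the cached tarball is already on disk, which is why each check completes in microseconds. A minimal Go sketch of that check-under-lock pattern, not minikube's actual implementation, with an illustrative path:

    // ensureCached saves an artifact only if it is not already cached.
    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    var locks sync.Map // one mutex per cache path

    func ensureCached(path string, save func(string) error) error {
        mu, _ := locks.LoadOrStore(path, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()
        if _, err := os.Stat(path); err == nil {
            return nil // tar file already on disk, nothing to do
        }
        return save(path)
    }

    func main() {
        _ = ensureCached("/tmp/cache/kube-apiserver_v1.35.0-rc.1", func(p string) error {
            fmt.Println("would download and save", p)
            return nil
        })
    }
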
	I1222 01:43:30.477002 2142162 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:43:30.477024 2142162 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:43:30.477043 2142162 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:43:30.477072 2142162 start.go:360] acquireMachinesLock for no-preload-491054: {Name:mk963c2c890d85a550b13090cd7467d59e3caf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.477129 2142162 start.go:364] duration metric: took 36.634µs to acquireMachinesLock for "no-preload-491054"
	I1222 01:43:30.477153 2142162 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:43:30.477163 2142162 fix.go:54] fixHost starting: 
	I1222 01:43:30.477492 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:30.493752 2142162 fix.go:112] recreateIfNeeded on no-preload-491054: state=Stopped err=<nil>
	W1222 01:43:30.493785 2142162 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:43:30.496979 2142162 out.go:252] * Restarting existing docker container for "no-preload-491054" ...
	I1222 01:43:30.497063 2142162 cli_runner.go:164] Run: docker start no-preload-491054
	I1222 01:43:30.758339 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:30.783924 2142162 kic.go:430] container "no-preload-491054" state is running.
	I1222 01:43:30.784895 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:30.806120 2142162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:43:30.806345 2142162 machine.go:94] provisionDockerMachine start ...
	I1222 01:43:30.806411 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:30.836679 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:30.837006 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:30.837015 2142162 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:43:30.839411 2142162 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:43:33.973258 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:43:33.973282 2142162 ubuntu.go:182] provisioning hostname "no-preload-491054"
	I1222 01:43:33.973370 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:33.991448 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:33.991768 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:33.991785 2142162 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-491054 && echo "no-preload-491054" | sudo tee /etc/hostname
	I1222 01:43:34.134668 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:43:34.134773 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.153303 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:34.153657 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:34.153681 2142162 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491054/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:43:34.285755 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: 
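
Provisioning drives the container over SSH with a native Go client; the earlier "Error dialing TCP: ssh: handshake failed: EOF" line shows the first dial landing before sshd inside the just-restarted container is ready, after which the hostname commands succeed. A minimal retry sketch using golang.org/x/crypto/ssh (127.0.0.1:38781 is the forwarded port from this run; the key path is a hypothetical placeholder):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd accepts the handshake.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(2 * time.Second) // sshd may still be coming up
        }
        return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
    }

    func main() {
        key, err := os.ReadFile("/path/to/machines/no-preload-491054/id_rsa") // hypothetical
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test machines
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:38781", cfg, 10)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh ready")
    }
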
	I1222 01:43:34.285786 2142162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:43:34.285816 2142162 ubuntu.go:190] setting up certificates
	I1222 01:43:34.285832 2142162 provision.go:84] configureAuth start
	I1222 01:43:34.285896 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:34.304161 2142162 provision.go:143] copyHostCerts
	I1222 01:43:34.304238 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:43:34.304271 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:43:34.304354 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:43:34.304468 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:43:34.304480 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:43:34.304508 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:43:34.304576 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:43:34.304585 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:43:34.304610 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:43:34.304663 2142162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.no-preload-491054 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054]
	I1222 01:43:34.369088 2142162 provision.go:177] copyRemoteCerts
	I1222 01:43:34.369157 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:43:34.369200 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.387117 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:34.489125 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:43:34.507509 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:43:34.526028 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:43:34.544740 2142162 provision.go:87] duration metric: took 258.882924ms to configureAuth
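
configureAuth regenerates the machine's server certificate with a SAN list that mixes IP addresses and DNS names (san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054] in the provision.go line above). A sketch of issuing such a certificate with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with its own CA key:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-491054"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log line above:
            DNSNames:    []string{"localhost", "minikube", "no-preload-491054"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
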
	I1222 01:43:34.544808 2142162 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:43:34.545018 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:34.545137 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.562445 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:34.562759 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:34.562773 2142162 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:43:34.840774 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:43:34.840838 2142162 machine.go:97] duration metric: took 4.034478045s to provisionDockerMachine
	I1222 01:43:34.840864 2142162 start.go:293] postStartSetup for "no-preload-491054" (driver="docker")
	I1222 01:43:34.840891 2142162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:43:34.841014 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:43:34.841131 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.858548 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:34.957415 2142162 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:43:34.960727 2142162 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:43:34.960754 2142162 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:43:34.960765 2142162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:43:34.960821 2142162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:43:34.960908 2142162 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:43:34.961020 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:43:34.968556 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:43:34.986959 2142162 start.go:296] duration metric: took 146.06498ms for postStartSetup
	I1222 01:43:34.987050 2142162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:43:34.987089 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.005930 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.110222 2142162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:43:35.118762 2142162 fix.go:56] duration metric: took 4.641592572s for fixHost
	I1222 01:43:35.118836 2142162 start.go:83] releasing machines lock for "no-preload-491054", held for 4.641692655s
	I1222 01:43:35.118944 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:35.139078 2142162 ssh_runner.go:195] Run: cat /version.json
	I1222 01:43:35.139134 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.139394 2142162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:43:35.139451 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.169405 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.178990 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.269046 2142162 ssh_runner.go:195] Run: systemctl --version
	I1222 01:43:35.364120 2142162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:43:35.400545 2142162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:43:35.405009 2142162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:43:35.405089 2142162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:43:35.413041 2142162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:43:35.413064 2142162 start.go:496] detecting cgroup driver to use...
	I1222 01:43:35.413113 2142162 detect.go:187] detected "cgroupfs" cgroup driver on host os
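
The cgroup driver detected here has to match what later steps configure: the sed edits below set cgroup_manager = "cgroupfs" for CRI-O, and the generated kubelet config sets cgroupDriver: cgroupfs. The log does not show how detection works internally; one common heuristic (an assumption, not necessarily minikube's method) is to probe for the cgroup v2 unified hierarchy:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // cgroup v2 exposes a unified hierarchy with this control file;
        // on cgroup v1 hosts it is absent.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1")
        }
    }
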
	I1222 01:43:35.413189 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:43:35.428420 2142162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:43:35.441740 2142162 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:43:35.441807 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:43:35.456961 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:43:35.470063 2142162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:43:35.582136 2142162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:43:35.708479 2142162 docker.go:234] disabling docker service ...
	I1222 01:43:35.708551 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:43:35.723755 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:43:35.736977 2142162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:43:35.868719 2142162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:43:35.995326 2142162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:43:36.015703 2142162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:43:36.031200 2142162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:43:36.031291 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.040788 2142162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:43:36.040868 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.050288 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.059682 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.069142 2142162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:43:36.078174 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.088084 2142162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.097092 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.106783 2142162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:43:36.114759 2142162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:43:36.122164 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:36.245950 2142162 ssh_runner.go:195] Run: sudo systemctl restart crio
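
Each sed one-liner above rewrites a single key in CRI-O's drop-in config, and the edits only take effect after the daemon-reload and crio restart that follow. A Go equivalent of the pause_image edit, as a sketch (file path taken from the log, regex simplified):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Replace the whole pause_image line, like the sed expression above.
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
        // CRI-O picks this up only after: systemctl daemon-reload && systemctl restart crio
    }
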
	I1222 01:43:36.431807 2142162 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:43:36.431892 2142162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:43:36.435793 2142162 start.go:564] Will wait 60s for crictl version
	I1222 01:43:36.435862 2142162 ssh_runner.go:195] Run: which crictl
	I1222 01:43:36.439341 2142162 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:43:36.467888 2142162 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:43:36.467972 2142162 ssh_runner.go:195] Run: crio --version
	I1222 01:43:36.498281 2142162 ssh_runner.go:195] Run: crio --version
	I1222 01:43:36.532998 2142162 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:43:36.535843 2142162 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:43:36.552071 2142162 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:43:36.555936 2142162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
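
The bash pipeline above makes the /etc/hosts entry idempotent: drop any stale line for the name, then append the fresh mapping. The same update in Go, as a sketch (the original writes via a temp file and sudo cp, elided here):

    package main

    import (
        "os"
        "strings"
    )

    // setHostsEntry removes any line ending in "\t<name>" and appends
    // "ip\tname", mirroring the grep -v / echo pipeline above.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
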
	I1222 01:43:36.565145 2142162 kubeadm.go:884] updating cluster {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:43:36.565259 2142162 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:43:36.565301 2142162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:43:36.609695 2142162 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:43:36.609715 2142162 cache_images.go:86] Images are preloaded, skipping loading
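
"sudo crictl images --output json" lists what the runtime already holds; because every expected image is present, the load step is skipped even though this profile runs with --preload=false. A sketch that performs the same comparison (the JSON field names follow crictl's output schema and should be treated as assumptions to verify against your crictl version):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        fmt.Println("kube-apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"])
    }
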
	I1222 01:43:36.609722 2142162 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:43:36.609826 2142162 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:43:36.609901 2142162 ssh_runner.go:195] Run: crio config
	I1222 01:43:36.685397 2142162 cni.go:84] Creating CNI manager for ""
	I1222 01:43:36.685422 2142162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:43:36.685444 2142162 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:43:36.685471 2142162 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491054 NodeName:no-preload-491054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:43:36.685595 2142162 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
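
The kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2219 bytes, per the scp line below): a four-document YAML stream with InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks such a stream and prints each document's kind, using gopkg.in/yaml.v3 (the local file name is a placeholder):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the stream above
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break // end of the multi-document stream
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
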
	
	I1222 01:43:36.685674 2142162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:43:36.693362 2142162 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:43:36.693431 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:43:36.700893 2142162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:43:36.713423 2142162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:43:36.726038 2142162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:43:36.738412 2142162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:43:36.741844 2142162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:43:36.751009 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:36.872344 2142162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:43:36.895746 2142162 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054 for IP: 192.168.76.2
	I1222 01:43:36.895766 2142162 certs.go:195] generating shared ca certs ...
	I1222 01:43:36.895796 2142162 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:36.895947 2142162 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:43:36.895998 2142162 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:43:36.896010 2142162 certs.go:257] generating profile certs ...
	I1222 01:43:36.896128 2142162 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key
	I1222 01:43:36.896216 2142162 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a
	I1222 01:43:36.896270 2142162 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key
	I1222 01:43:36.896403 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:43:36.896439 2142162 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:43:36.896451 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:43:36.896492 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:43:36.896520 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:43:36.896557 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:43:36.896612 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:43:36.901156 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:43:36.921532 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:43:36.938876 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:43:36.955891 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:43:36.973839 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:43:36.990503 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:43:37.012123 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:43:37.035419 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:43:37.054551 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:43:37.071987 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:43:37.089400 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:43:37.106223 2142162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:43:37.118959 2142162 ssh_runner.go:195] Run: openssl version
	I1222 01:43:37.125211 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.132736 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:43:37.140145 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.143651 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.143715 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.184794 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:43:37.192255 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.199933 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:43:37.207517 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.211242 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.211312 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.252148 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:43:37.259423 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.266494 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:43:37.274006 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.277730 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.277799 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.318687 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:43:37.326275 2142162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:43:37.330267 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:43:37.371711 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:43:37.412636 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:43:37.455494 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:43:37.496495 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:43:37.537537 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
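
Each "openssl x509 ... -checkend 86400" run above asks whether the certificate expires within the next 24 hours (86400 seconds); a zero exit means it does not, so the existing control-plane certs are reused. The equivalent check in Go with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
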
	I1222 01:43:37.578877 2142162 kubeadm.go:401] StartCluster: {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
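The dump above is Go's %+v rendering of the profile's cluster configuration. For orientation only, a hypothetical partial mirror of the fields most relevant to this run (field names taken from the dump; this is not minikube's real type definition):

```go
package main

import "fmt"

// Hypothetical partial mirror of the fields visible in the dump above;
// minikube's real ClusterConfig lives in its own package and carries many
// more fields than shown here.
type KubernetesConfig struct {
	KubernetesVersion string // v1.35.0-rc.1
	ClusterName       string // no-preload-491054
	ContainerRuntime  string // crio
	ServiceCIDR       string // 10.96.0.0/12
}

type ClusterConfig struct {
	Name             string
	Driver           string // docker
	Memory           int    // MiB; 3072 in the dump
	CPUs             int
	APIServerPort    int // 8443
	KubernetesConfig KubernetesConfig
	Addons           map[string]bool // dashboard:true in the dump
}

func main() {
	cc := ClusterConfig{
		Name: "no-preload-491054", Driver: "docker", Memory: 3072, CPUs: 2,
		APIServerPort: 8443,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.35.0-rc.1", ClusterName: "no-preload-491054",
			ContainerRuntime: "crio", ServiceCIDR: "10.96.0.0/12",
		},
		Addons: map[string]bool{"dashboard": true},
	}
	fmt.Printf("%+v\n", cc) // reproduces the flattened key:value style of the log
}
```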
	I1222 01:43:37.578975 2142162 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:43:37.579049 2142162 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:43:37.606764 2142162 cri.go:96] found id: ""
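`found id: ""` means the crictl query returned nothing: no kube-system containers survive from the previous run, so the control plane must be restarted from configuration. A sketch of that listing step, with flags copied from the logged command (exec-based, like the ssh_runner call):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// --quiet prints one container ID per line; the label filter restricts
	// the listing to kube-system pods, matching the logged command.
	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```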
	I1222 01:43:37.606851 2142162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:43:37.614979 2142162 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:43:37.615050 2142162 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:43:37.615145 2142162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:43:37.622476 2142162 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:43:37.622933 2142162 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:37.623103 2142162 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-491054" cluster setting kubeconfig missing "no-preload-491054" context setting]
	I1222 01:43:37.623423 2142162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
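The WriteFile line shows the parameters minikube logs for its kubeconfig file lock ({Delay:500ms Timeout:1m0s}): writers poll for the lock, sleeping Delay between attempts, and give up after Timeout. A hypothetical sketch of that shape only (minikube's real implementation is its own lock package; the lockfile approach here is purely illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lockfile next to path, sleeping delay
// between attempts and failing after timeout, mirroring the logged
// {Delay:500ms Timeout:1m0s} parameters.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	lock := path + ".lock"
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lock) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock on %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/kubeconfig", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("holding lock; safe to rewrite the kubeconfig")
}
```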
	I1222 01:43:37.624922 2142162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:43:37.632919 2142162 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:43:37.632949 2142162 kubeadm.go:602] duration metric: took 17.880466ms to restartPrimaryControlPlane
	I1222 01:43:37.632959 2142162 kubeadm.go:403] duration metric: took 54.094028ms to StartCluster
	I1222 01:43:37.632974 2142162 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.633031 2142162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:37.633700 2142162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.633896 2142162 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:43:37.634282 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:37.634245 2142162 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:43:37.634339 2142162 addons.go:70] Setting storage-provisioner=true in profile "no-preload-491054"
	I1222 01:43:37.634361 2142162 addons.go:239] Setting addon storage-provisioner=true in "no-preload-491054"
	I1222 01:43:37.634393 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.634429 2142162 addons.go:70] Setting dashboard=true in profile "no-preload-491054"
	I1222 01:43:37.634454 2142162 addons.go:239] Setting addon dashboard=true in "no-preload-491054"
	W1222 01:43:37.634473 2142162 addons.go:248] addon dashboard should already be in state true
	I1222 01:43:37.634514 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.634848 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.635327 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.636958 2142162 addons.go:70] Setting default-storageclass=true in profile "no-preload-491054"
	I1222 01:43:37.636991 2142162 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-491054"
	I1222 01:43:37.637296 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.637634 2142162 out.go:179] * Verifying Kubernetes components...
	I1222 01:43:37.640709 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:37.689115 2142162 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:43:37.692012 2142162 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:43:37.693731 2142162 addons.go:239] Setting addon default-storageclass=true in "no-preload-491054"
	I1222 01:43:37.693777 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.694198 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.694799 2142162 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:43:37.694913 2142162 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:37.694931 2142162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:43:37.694979 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.697616 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:43:37.697665 2142162 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:43:37.697851 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.735798 2142162 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:37.735820 2142162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:43:37.735884 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.742160 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.745661 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.771276 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
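Each `docker container inspect -f ...` call above resolves the host port mapped to the node's SSH port 22, which is why all three ssh clients target 127.0.0.1:38781. The format string is a Go text/template; a self-contained sketch of the same expression over a mocked fragment of the inspect JSON:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Mocked fragment of `docker inspect` output, shaped like the part the
	// logged format string walks: NetworkSettings.Ports["22/tcp"][0].HostPort.
	container := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string]any{
				"22/tcp": []map[string]string{{"HostIp": "127.0.0.1", "HostPort": "38781"}},
			},
		},
	}
	tmpl := template.Must(template.New("sshPort").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, container); err != nil { // prints 38781
		panic(err)
	}
}
```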
	I1222 01:43:37.878826 2142162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:43:37.905903 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:43:37.905939 2142162 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:43:37.908326 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:37.924424 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:37.931427 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:43:37.931493 2142162 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:43:37.987971 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:43:37.988038 2142162 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:43:38.013982 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:43:38.014064 2142162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:43:38.030075 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:43:38.030106 2142162 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:43:38.044755 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:43:38.044781 2142162 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:43:38.060102 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:43:38.060127 2142162 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:43:38.074803 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:43:38.074827 2142162 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:43:38.089429 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:38.089492 2142162 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:43:38.107550 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
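The pattern in this stretch is consistent: each addon manifest is scp'd into /etc/kubernetes/addons, then applied with the node-local kubectl that matches the cluster's Kubernetes version, with KUBECONFIG pointed at the in-node kubeconfig. A sketch of how such an apply command could be assembled (paths from the log; running it via exec is a stand-in for minikube's ssh_runner, which executes inside the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyManifests builds the logged apply invocation: the node's own kubectl
// binary for the given Kubernetes version, one -f flag per manifest.
func applyManifests(version string, manifests ...string) *exec.Cmd {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/" + version + "/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return exec.Command("sudo", args...)
}

func main() {
	cmd := applyManifests("v1.35.0-rc.1",
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml")
	fmt.Println(strings.Join(cmd.Args, " ")) // mirrors the logged command line
}
```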
	I1222 01:43:38.596163 2142162 node_ready.go:35] waiting up to 6m0s for node "no-preload-491054" to be "Ready" ...
	W1222 01:43:38.596533 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:38.596585 2142162 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.596646 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.596869 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
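From here the log is dominated by the retry loop in retry.go: every apply fails with connection refused because the restarted apiserver is not yet listening on localhost:8443, and each failure is retried after a backoff (the first wait is 300ms). A minimal sketch of that pattern, with an illustrative doubling policy (the real backoff schedule may differ):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryApply keeps invoking apply until it succeeds or the deadline passes,
// sleeping an increasing backoff between attempts; the 300ms initial wait
// matches the log, the doubling is illustrative.
func retryApply(apply func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for {
		err := apply()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	attempts := 0
	err := retryApply(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connect: connection refused") // apiserver not up yet
		}
		return nil
	}, time.Minute)
	fmt.Println("attempts:", attempts, "err:", err)
}
```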
	I1222 01:43:38.842397 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:38.893925 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:38.919126 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:38.920556 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.990827 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.996043 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.180030 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:39.243487 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.507921 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:39.546361 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:39.585095 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:39.637531 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.923912 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:39.969528 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:39.984506 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:40.021662 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:40.047521 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:40.093478 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:40.597265 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
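In parallel with the addon retries, node_ready.go polls the node object directly at https://192.168.76.2:8443 and hits the same connection refused until the apiserver comes back. A hedged sketch of that readiness check over plain HTTP (a real client also needs credentials, and TLS verification is skipped here only to keep the sketch short):

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeReady fetches a node object and reports whether its Ready condition is
// True; a dial error, as in the log, simply means "retry later".
func nodeReady(host, name string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://" + host + ":8443/api/v1/nodes/" + name)
	if err != nil {
		return false, err // e.g. connect: connection refused while restarting
	}
	defer resp.Body.Close()
	var node struct {
		Status struct {
			Conditions []struct{ Type, Status string }
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&node); err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady("192.168.76.2", "no-preload-491054")
	fmt.Println("ready:", ready, "err:", err)
}
```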
	I1222 01:43:40.934859 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:40.995829 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:40.998914 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:41.002415 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:41.074990 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:41.086742 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:41.721168 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:41.778438 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:41.812650 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:41.884140 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:42.092665 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:42.179951 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:43.096812 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:43.264030 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:43.326256 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:43.941488 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:44.001292 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:44.307335 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:44.364551 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:45.097718 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:45.303907 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:45.386285 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:46.276008 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:46.335276 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
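
Every apply in this stretch fails the same way: `kubectl apply` performs client-side schema validation by downloading the OpenAPI document from the apiserver (https://localhost:8443/openapi/v2), and the apiserver is refusing connections, so validation fails before any manifest is sent. The suggested `--validate=false` would skip the schema download but would not help here, since the apply itself would hit the same refused connection. A minimal sketch (not minikube's code) of gating the applies on apiserver reachability; the endpoint, cadence, and deadline are assumptions read off this log:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer probes the same endpoint kubectl validates against,
    // returning once the apiserver answers at the HTTP level.
    func waitForAPIServer(base string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver cert is self-signed in this setup; a real client
    		// would pin the cluster CA instead of skipping verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(base + "/openapi/v2")
    		if err == nil {
    			resp.Body.Close()
    			return nil // the apiserver is answering; applies can proceed
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not reachable within %s", base, deadline)
    }

    func main() {
    	if err := waitForAPIServer("https://localhost:8443", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
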
	W1222 01:43:47.596694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:48.343243 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:48.427142 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:49.596755 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:50.701128 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:50.740652 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:50.774874 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:50.809842 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:51.233104 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:51.289972 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:52.096678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:43:54.596681 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:55.496576 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:55.556091 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:55.556130 2142162 retry.go:84] will retry after 5.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
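
retry.go applies a growing, jittered delay per failing command (storageclass: 5.1s then 43.4s; dashboard: 14.3s, 31.7s, 39.2s; storage-provisioner: 32.9s later in this log). A sketch of a jittered-backoff retry loop in that spirit; the base delay, growth factor, and attempt count are assumptions inferred from the delays above, not minikube's actual constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff reruns run() until it succeeds or attempts are spent,
    // doubling the delay each round with up to 50% random jitter so parallel
    // retries (dashboard, storageclass, provisioner) spread out.
    func retryWithBackoff(attempts int, base time.Duration, run func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = run(); err == nil {
    			return nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: %v\n", jittered.Round(100*time.Millisecond), err)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(4, 5*time.Second, func() error {
    		return errors.New("connection refused") // stand-in for the failing kubectl apply
    	})
    }
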
	W1222 01:43:57.097745 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:57.830249 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:57.896591 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:59.048455 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:59.117670 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:59.596625 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:00.689639 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:00.748079 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:01.596784 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:04.096680 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:06.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:07.740464 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:07.801397 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:07.801438 2142162 retry.go:84] will retry after 14.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:08.406997 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:08.464207 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:08.596682 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:08.637968 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:08.697473 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:10.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:13.096695 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:15.097712 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:17.597645 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:20.096640 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:22.096730 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:22.136999 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:22.201886 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:22.201932 2142162 retry.go:84] will retry after 31.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:24.596746 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:27.096668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:28.713976 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:28.773383 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:28.773420 2142162 retry.go:84] will retry after 43.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:29.097212 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:29.449928 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:29.512969 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:31.596698 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:33.597657 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:36.096782 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:38.096835 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:40.596871 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:42.597417 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:44.597559 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:47.097409 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:49.596695 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
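
Interleaved with the apply retries, node_ready.go polls the node object roughly every 2-2.5s waiting for its Ready condition, and each poll fails with the same refused connection. Roughly equivalent client-go logic, as a sketch only: the kubeconfig path and node name are taken from this log, and this is not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		ready, err := nodeReady(cs, "no-preload-491054")
    		if err != nil {
    			fmt.Println("will retry:", err)
    		} else if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // cadence seen in the log above
    	}
    }
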
	I1222 01:44:50.536518 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:50.603160 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:50.603196 2142162 retry.go:84] will retry after 32.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:51.597438 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:53.597561 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:53.885143 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:53.945074 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:53.945111 2142162 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:56.096872 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:58.596737 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:00.597427 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:03.096930 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:05.596787 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:08.096924 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:10.097654 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:12.224247 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:45:12.280984 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:12.281100 2142162 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:45:12.596649 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:14.596729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:16.596790 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:18.596938 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:21.096848 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:23.505160 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:45:23.571151 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:23.571268 2142162 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:45:23.597779 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:26.096739 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:28.096845 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:30.096928 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:32.596762 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:33.183034 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:45:33.241129 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:33.241240 2142162 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:45:33.244397 2142162 out.go:179] * Enabled addons: 
	I1222 01:45:33.247144 2142162 addons.go:530] duration metric: took 1m55.612898686s for enable addons: enabled=[]
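The addon-enable phase above gave up after roughly 1m56s with an empty enabled=[] list because every apply was rejected. Once the apiserver is reachable again, the same addons can be re-enabled per profile with standard minikube commands; a sketch using the profile name from this run:

    # Re-enable the addons that failed above, once the cluster is healthy.
    minikube -p no-preload-491054 addons enable default-storageclass
    minikube -p no-preload-491054 addons enable storage-provisioner
    minikube -p no-preload-491054 addons enable dashboard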
	W1222 01:45:34.596832 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	[... 105 further identical node_ready.go "Ready" poll warnings elided: the same connection-refused GET against https://192.168.76.2:8443 was retried every 2 to 2.5s from 01:45:37 through 01:49:35 ...]
	W1222 01:49:38.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:38.596364 2142162 node_ready.go:38] duration metric: took 6m0.000147881s for node "no-preload-491054" to be "Ready" ...
	I1222 01:49:38.599511 2142162 out.go:203] 
	W1222 01:49:38.602362 2142162 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:49:38.602381 2142162 out.go:285] * 
	W1222 01:49:38.604626 2142162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:49:38.608399 2142162 out.go:203] 

** /stderr **
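The stderr above reduces to a single symptom: the apiserver at 192.168.76.2:8443 refused connections for the entire 6-minute wait, so the node "Ready" poll in node_ready.go never saw a healthy condition. A rough bash equivalent of that poll, with the kubeconfig path and node name taken from the log:

    # Approximation of the node_ready.go loop seen above: poll the node's
    # Ready condition until it reports "True".
    until kubectl --kubeconfig /var/lib/minikube/kubeconfig \
        get node no-preload-491054 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' \
        2>/dev/null | grep -qx True; do
      echo "node no-preload-491054 not Ready yet; retrying in 2s"
      sleep 2
    done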
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 80
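Exit status 80 corresponds to minikube's guest-layer failures, matching the GUEST_START reason shown above. If the apiserver were merely slow rather than dead, the readiness window could be widened with the standard --wait-timeout flag (default 6m0s); a sketch reusing the exact arguments from the failed run:

    # Sketch: rerun the failed start with a longer node-readiness window.
    out/minikube-linux-arm64 start -p no-preload-491054 --memory=3072 \
      --alsologtostderr --wait=true --wait-timeout=10m --preload=false \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-rc.1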
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2142290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:43:30.530329674Z",
	            "FinishedAt": "2025-12-22T01:43:29.176921862Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f5325c5c7084a0f9b6fab754824a17e882a8a3c300adc09d5d8d9446a681793",
	            "SandboxKey": "/var/run/docker/netns/4f5325c5c708",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38785"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38783"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:66:ff:3e:a4:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "44bb6686738bfe98627c492c33328f969aae00e65930a103715f4085574a78d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
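The inspect output narrows the failure: the container is Running, 8443/tcp is published to 127.0.0.1:38784, and the container holds 192.168.76.2 on the no-preload-491054 network, so the refused connections originate inside the guest (the apiserver process) rather than in Docker networking. Two quick queries over the same data, as a sketch:

    # Confirm the published apiserver port (matches NetworkSettings.Ports above).
    docker port no-preload-491054 8443/tcp
    # Read the container IP from the network block of the inspect JSON.
    docker inspect -f \
      '{{ (index .NetworkSettings.Networks "no-preload-491054").IPAddress }}' \
      no-preload-491054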
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 2 (337.494586ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
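The --format={{.Host}} template only surfaces the host state, which is why the command prints "Running" for a cluster whose apiserver is down (hence "exit status 2 (may be ok)"). The status template also exposes Kubelet, APIServer, and Kubeconfig fields; a sketch that would show which component is unhealthy:

    # Sketch: print the major status fields for the profile in one line.
    out/minikube-linux-arm64 status -p no-preload-491054 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'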
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	│ stop    │ -p no-preload-491054 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ addons  │ enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-704217 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ addons  │ enable dashboard -p newest-cni-704217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:48:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
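	(Decoding the format line against the first entry below: "I" is the severity, from the [IWEF] set Info/Warning/Error/Fatal; "1222" is the date, December 22; "01:48:40.242767" the wall-clock time with microseconds; "2149446" the thread id, here the process id; and "out.go:360" the source file and line that emitted the message.)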
	I1222 01:48:40.242767 2149446 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:48:40.242975 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243012 2149446 out.go:374] Setting ErrFile to fd 2...
	I1222 01:48:40.243032 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243304 2149446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:48:40.243710 2149446 out.go:368] Setting JSON to false
	I1222 01:48:40.244718 2149446 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117070,"bootTime":1766251051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:48:40.244822 2149446 start.go:143] virtualization:  
	I1222 01:48:40.247655 2149446 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1222 01:48:35.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:37.597647 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:40.096931 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:40.251418 2149446 notify.go:221] Checking for updates...
	I1222 01:48:40.254130 2149446 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:48:40.257153 2149446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:48:40.259997 2149446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:40.262948 2149446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:48:40.265828 2149446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:48:40.268754 2149446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:48:40.272145 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:40.272826 2149446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:48:40.305841 2149446 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:48:40.306001 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.377826 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.367957332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.377936 2149446 docker.go:319] overlay module found
	I1222 01:48:40.381097 2149446 out.go:179] * Using the docker driver based on existing profile
	I1222 01:48:40.383847 2149446 start.go:309] selected driver: docker
	I1222 01:48:40.383862 2149446 start.go:928] validating driver "docker" against &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.383955 2149446 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:48:40.384655 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.450570 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.441705374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.450907 2149446 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:48:40.450934 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:40.450985 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:40.451021 2149446 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.454057 2149446 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:48:40.456938 2149446 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:48:40.459827 2149446 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:48:40.462746 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:40.462788 2149446 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:48:40.462800 2149446 cache.go:65] Caching tarball of preloaded images
	I1222 01:48:40.462841 2149446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:48:40.462889 2149446 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:48:40.462900 2149446 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:48:40.463007 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.482525 2149446 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:48:40.482547 2149446 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:48:40.482567 2149446 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:48:40.482597 2149446 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:48:40.482656 2149446 start.go:364] duration metric: took 36.504µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:48:40.482682 2149446 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:48:40.482690 2149446 fix.go:54] fixHost starting: 
	I1222 01:48:40.482953 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.499420 2149446 fix.go:112] recreateIfNeeded on newest-cni-704217: state=Stopped err=<nil>
	W1222 01:48:40.499450 2149446 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:48:40.502622 2149446 out.go:252] * Restarting existing docker container for "newest-cni-704217" ...
	I1222 01:48:40.502704 2149446 cli_runner.go:164] Run: docker start newest-cni-704217
	I1222 01:48:40.772557 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.795810 2149446 kic.go:430] container "newest-cni-704217" state is running.
	I1222 01:48:40.796193 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:40.819335 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.819554 2149446 machine.go:94] provisionDockerMachine start ...
	I1222 01:48:40.819841 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:40.841916 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:40.842255 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:40.842265 2149446 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:48:40.844138 2149446 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:48:43.976885 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
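	The first dial above fails with "ssh: handshake failed: EOF" because sshd inside the just-restarted container is not up yet; libmachine retries until the hostname command answers, about three seconds later here. An equivalent manual probe, with the port, key path, and "docker" user all taken from the sshutil lines later in this log:
	
	    # Manual equivalent of the provisioner's first SSH check.
	    ssh -i /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa \
	        -p 38786 docker@127.0.0.1 hostname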
	
	I1222 01:48:43.976911 2149446 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:48:43.977014 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:43.994105 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:43.994420 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:43.994436 2149446 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:48:44.140605 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:44.140693 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.161016 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.161373 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.161391 2149446 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:48:44.293770 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:48:44.293859 2149446 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:48:44.293905 2149446 ubuntu.go:190] setting up certificates
	I1222 01:48:44.293936 2149446 provision.go:84] configureAuth start
	I1222 01:48:44.294013 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:44.316482 2149446 provision.go:143] copyHostCerts
	I1222 01:48:44.316547 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:48:44.316556 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:48:44.316633 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:48:44.316728 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:48:44.316733 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:48:44.316759 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:48:44.316806 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:48:44.316811 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:48:44.316833 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:48:44.316876 2149446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:48:44.455342 2149446 provision.go:177] copyRemoteCerts
	I1222 01:48:44.455417 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:48:44.455467 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.474519 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:44.573438 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:48:44.590713 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:48:44.609690 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:48:44.627314 2149446 provision.go:87] duration metric: took 333.343615ms to configureAuth
	I1222 01:48:44.627344 2149446 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:48:44.627539 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:44.627653 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.649833 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.650146 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.650167 2149446 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:48:44.926182 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:48:44.926204 2149446 machine.go:97] duration metric: took 4.106640691s to provisionDockerMachine
	I1222 01:48:44.926234 2149446 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:48:44.926251 2149446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:48:44.926311 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:48:44.926356 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.946595 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.045827 2149446 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:48:45.062421 2149446 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:48:45.062452 2149446 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:48:45.062466 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:48:45.062535 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:48:45.062625 2149446 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:48:45.062747 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:48:45.075486 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:45.105717 2149446 start.go:296] duration metric: took 179.465684ms for postStartSetup
	I1222 01:48:45.105860 2149446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:48:45.105935 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.131206 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	W1222 01:48:42.097949 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:44.597158 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:45.246727 2149446 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:48:45.256769 2149446 fix.go:56] duration metric: took 4.774070481s for fixHost
	I1222 01:48:45.256799 2149446 start.go:83] releasing machines lock for "newest-cni-704217", held for 4.774128563s
	I1222 01:48:45.256887 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:45.279239 2149446 ssh_runner.go:195] Run: cat /version.json
	I1222 01:48:45.279292 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.279295 2149446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:48:45.279352 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.302892 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.315026 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.495990 2149446 ssh_runner.go:195] Run: systemctl --version
	I1222 01:48:45.502719 2149446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:48:45.539801 2149446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:48:45.545121 2149446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:48:45.545189 2149446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:48:45.553556 2149446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:48:45.553582 2149446 start.go:496] detecting cgroup driver to use...
	I1222 01:48:45.553612 2149446 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:48:45.553661 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:48:45.568706 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:48:45.581738 2149446 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:48:45.581818 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:48:45.597902 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:48:45.614693 2149446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:48:45.735512 2149446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:48:45.846918 2149446 docker.go:234] disabling docker service ...
	I1222 01:48:45.847000 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:48:45.861997 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:48:45.874967 2149446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:48:45.982721 2149446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:48:46.093378 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:48:46.106471 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:48:46.120288 2149446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:48:46.120369 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.130327 2149446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:48:46.130417 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.139300 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.148385 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.157463 2149446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:48:46.165914 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.175804 2149446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.185241 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.194290 2149446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:48:46.201953 2149446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:48:46.209542 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.327150 2149446 ssh_runner.go:195] Run: sudo systemctl restart crio
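	The Run lines above are minikube's cri-o reconfiguration pass: point crictl at the cri-o socket, switch the pause image, force the cgroupfs cgroup manager, pin conmon to the pod cgroup, open unprivileged low ports via default_sysctls, enable IP forwarding, then restart the runtime. A consolidated sketch of the core file edits, using the same keys and drop-in file as the log (run inside the node, e.g. via minikube ssh):
	
	    # Condensed form of the per-sed edits the log applies to cri-o's drop-in.
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio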
	I1222 01:48:46.504853 2149446 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:48:46.504948 2149446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:48:46.509123 2149446 start.go:564] Will wait 60s for crictl version
	I1222 01:48:46.509240 2149446 ssh_runner.go:195] Run: which crictl
	I1222 01:48:46.513227 2149446 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:48:46.539190 2149446 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:48:46.539329 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.570401 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.603161 2149446 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:48:46.606112 2149446 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:48:46.622587 2149446 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:48:46.626479 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.639159 2149446 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:48:46.642073 2149446 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:48:46.642257 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:46.642336 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.682401 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.682426 2149446 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:48:46.682483 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.712357 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.712379 2149446 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:48:46.712388 2149446 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:48:46.712483 2149446 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
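	The unit fragment above is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 372-byte scp below); the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before redefining it. On the node, the effective merged unit can be checked with:
	
	    # Show the kubelet unit together with all drop-ins (run inside the node).
	    systemctl cat kubelet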
	I1222 01:48:46.712573 2149446 ssh_runner.go:195] Run: crio config
	I1222 01:48:46.797974 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:46.798006 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:46.798025 2149446 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:48:46.798055 2149446 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:48:46.798231 2149446 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
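	This multi-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new by the 2217-byte scp a few lines below. Recent kubeadm releases (v1.26+) can lint such a file before it is applied; a sketch, assuming the binary path the log checks immediately below:
	
	    # Sanity-check the generated config before kubeadm consumes it.
	    # "kubeadm config validate" is available in kubeadm v1.26 and later.
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new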
	
	I1222 01:48:46.798309 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:48:46.806374 2149446 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:48:46.806461 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:48:46.814198 2149446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:48:46.828274 2149446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:48:46.841483 2149446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:48:46.854374 2149446 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:48:46.857976 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.868061 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.994841 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:47.017882 2149446 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:48:47.017947 2149446 certs.go:195] generating shared ca certs ...
	I1222 01:48:47.017976 2149446 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.018155 2149446 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:48:47.018235 2149446 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:48:47.018271 2149446 certs.go:257] generating profile certs ...
	I1222 01:48:47.018410 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:48:47.018519 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:48:47.018584 2149446 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:48:47.018730 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:48:47.018786 2149446 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:48:47.018810 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:48:47.018867 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:48:47.018918 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:48:47.018973 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:48:47.019053 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:47.019666 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:48:47.038515 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:48:47.057188 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:48:47.080288 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:48:47.101214 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:48:47.120685 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:48:47.140350 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:48:47.160406 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:48:47.178292 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:48:47.195480 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:48:47.213647 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:48:47.231427 2149446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:48:47.244572 2149446 ssh_runner.go:195] Run: openssl version
	I1222 01:48:47.250948 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.258307 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:48:47.265613 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269481 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.314712 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:48:47.322028 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.329298 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:48:47.336716 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340415 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340535 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.381670 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:48:47.389031 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.396101 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:48:47.403471 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407346 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.449487 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
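
The three ln/ls/openssl rounds above are the standard OpenSSL trust-store wiring: place the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, and ensure a <subject-hash>.0 link exists, since OpenSSL looks certificates up by that hash. One round, sketched by hand (the hash differs per certificate):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 for this CA
    sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -s "$CERT" "/etc/ssl/certs/${HASH}.0"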
	I1222 01:48:47.456987 2149446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:48:47.460641 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:48:47.501381 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:48:47.542911 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:48:47.587141 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:48:47.628220 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:48:47.670351 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
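
The six -checkend 86400 runs are 24-hour expiry probes: openssl x509 exits non-zero when the certificate expires within the given number of seconds, which is presumably what routes minikube into its cert-renewal path. A standalone sketch over the same files:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: valid for >24h" || echo "${c}: expires within 24h"
    done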
	I1222 01:48:47.715724 2149446 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:47.715830 2149446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:48:47.715926 2149446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:48:47.743013 2149446 cri.go:96] found id: ""
	I1222 01:48:47.743133 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:48:47.750858 2149446 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:48:47.750922 2149446 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:48:47.750979 2149446 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:48:47.758109 2149446 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:48:47.758748 2149446 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.759013 2149446 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-704217" cluster setting kubeconfig missing "newest-cni-704217" context setting]
	I1222 01:48:47.759473 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.760858 2149446 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:48:47.768601 2149446 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:48:47.768668 2149446 kubeadm.go:602] duration metric: took 17.733602ms to restartPrimaryControlPlane
	I1222 01:48:47.768685 2149446 kubeadm.go:403] duration metric: took 52.971018ms to StartCluster
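
The "does not require reconfiguration" verdict follows from the diff just above: the deployed kubeadm.yaml and the freshly generated kubeadm.yaml.new are identical, so the control plane can be restarted in place. The decision presumably keys off diff's exit status, roughly:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "configs identical: restart control plane as-is"      # the branch taken in this run
    else
      echo "configs differ: reconfigure with kubeadm.yaml.new"
    fi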
	I1222 01:48:47.768701 2149446 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.768757 2149446 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.769685 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.769905 2149446 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:48:47.770207 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:47.770291 2149446 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:48:47.770386 2149446 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-704217"
	I1222 01:48:47.770417 2149446 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-704217"
	I1222 01:48:47.770506 2149446 addons.go:70] Setting dashboard=true in profile "newest-cni-704217"
	I1222 01:48:47.770541 2149446 addons.go:70] Setting default-storageclass=true in profile "newest-cni-704217"
	I1222 01:48:47.770547 2149446 addons.go:239] Setting addon dashboard=true in "newest-cni-704217"
	W1222 01:48:47.770588 2149446 addons.go:248] addon dashboard should already be in state true
	I1222 01:48:47.770610 2149446 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-704217"
	I1222 01:48:47.770621 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.770999 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.771324 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.770518 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.772038 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.775384 2149446 out.go:179] * Verifying Kubernetes components...
	I1222 01:48:47.778350 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:47.802410 2149446 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:48:47.805256 2149446 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:48:47.807963 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:48:47.807985 2149446 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:48:47.808059 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.828399 2149446 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:48:47.831382 2149446 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:47.831415 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:48:47.831508 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.834144 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
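
The Go template handed to docker container inspect resolves which host port Docker mapped to the container's SSH port 22/tcp; the Port:38786 in the ssh client lines is its output. The same lookup, standalone:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      newest-cni-704217    # prints the mapped host port, 38786 in this run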
	I1222 01:48:47.844610 2149446 addons.go:239] Setting addon default-storageclass=true in "newest-cni-704217"
	I1222 01:48:47.844670 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.845187 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.877451 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.891162 2149446 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:47.891205 2149446 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:48:47.891280 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.926288 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:48.042819 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:48.112488 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:48:48.112513 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:48:48.128623 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:48:48.128699 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:48:48.148283 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:48:48.148321 2149446 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:48:48.156766 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:48.161900 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:48.167407 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:48:48.167429 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:48:48.203144 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:48:48.203175 2149446 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:48:48.254999 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:48:48.255073 2149446 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:48:48.269219 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:48:48.269248 2149446 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:48:48.282496 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:48:48.282561 2149446 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:48:48.295350 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.295416 2149446 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:48:48.308102 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.846401 2149446 api_server.go:52] waiting for apiserver process to appear ...
	W1222 01:48:48.846466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:48.846520 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:48.846522 2149446 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846636 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846839 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
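
Every apply failure in this stretch has the same root cause and nothing to do with the manifests themselves: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and the restarted apiserver is not yet listening on localhost:8443, so the download is refused and minikube retries. The precondition can be probed directly (sketch):

    curl -ks https://localhost:8443/openapi/v2 -o /dev/null \
      && echo "apiserver reachable: apply would get past validation" \
      || echo "connection refused: the same failure as above"
    # the error text's own escape hatch, skipping validation entirely:
    #   kubectl apply --validate=false -f <manifest>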
	I1222 01:48:49.013510 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:49.073085 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.109119 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:49.170278 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.173753 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.230752 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.347104 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
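
Interleaved with the addon retries, the recurring pgrep line is the apiserver wait loop: -f matches against the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest match. In this run it polls every 500ms until a kube-apiserver PID appears. By hand:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process is up" || echo "not yet; poll again"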
	I1222 01:48:49.363263 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:49.431947 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.591271 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:49.624743 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.676288 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.711818 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.847130 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.002166 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:50.073384 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.206474 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:46.597453 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:49.097694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:50.265056 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.347283 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.486129 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:50.543791 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.847346 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.143881 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:51.205787 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.283923 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:51.341457 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.542158 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:51.609543 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.847044 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.045399 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:52.115058 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.347521 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.607702 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:52.683195 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.846571 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.305882 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:53.347187 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:53.376466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.687056 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:53.754007 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.847202 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.994728 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:54.057241 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:54.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:54.846622 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
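
	The interleaved "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines show minikube polling roughly every 500ms for a running apiserver process while the addon applies retry. A rough Go sketch of that polling shape, assuming only that pgrep is on PATH (illustrative, not minikube's ssh_runner):

	// pgrep_poll.go: illustrative sketch of the ~500ms polling above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func apiserverRunning() bool {
		// pgrep exits 0 when at least one process matches the pattern.
		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		for i := 0; i < 10; i++ {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("no kube-apiserver process after 5s of polling")
	}
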
	W1222 01:48:51.596668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:53.597225 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:55.347576 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:55.800828 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:55.847504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:55.860654 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.347301 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:56.472869 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:56.531116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.846537 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.346757 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.706997 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:57.770163 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:57.847151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.347452 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.521177 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:58.583041 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:58.847501 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.347147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.847527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.918746 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:59.977904 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.192217 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:55.597516 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:57.597678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.100741 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.333722 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.346752 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:00.847514 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.346636 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.846688 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.347266 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.846533 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.346590 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.346607 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.761472 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:49:04.773029 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:49:04.847157 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:04.857098 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.857138 2149446 retry.go:84] will retry after 7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:04.857185 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.879231 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:04.950615 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:02.596826 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:05.096792 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:05.347388 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:05.847467 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.346572 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.846691 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.347421 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.347439 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.847592 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.847473 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:07.596780 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:09.597714 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:10.346839 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:10.642468 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:10.708302 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.708346 2149446 retry.go:84] will retry after 9.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.847493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.346650 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846502 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:11.932894 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:12.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:12.847370 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.346723 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.846605 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.347318 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.846610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:12.096720 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:14.596604 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:15.276803 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:15.337116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.337152 2149446 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.347320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:15.847453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.846547 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.347494 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.847450 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.347436 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.846663 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.347534 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.846595 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:16.596756 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:19.096677 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:20.347375 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:20.550393 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:20.629907 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.629948 2149446 retry.go:84] will retry after 13.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.847352 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.347359 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.847541 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.346619 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.846606 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.347285 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.847257 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.346695 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.847195 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:21.096800 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:23.596617 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:25.347322 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:25.847511 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.346649 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.847539 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.261998 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:27.328263 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.328301 2149446 retry.go:84] will retry after 13.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.347392 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.847138 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.346635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.547391 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:28.610028 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:28.847445 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.346661 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.847366 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:25.596765 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:28.096729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:30.346864 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:30.846648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.846643 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.846656 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.346602 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.846639 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.494891 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:34.556245 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.556282 2149446 retry.go:84] will retry after 24.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.846617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:30.596693 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:33.096665 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:35.596759 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:38.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:38.596364 2142162 node_ready.go:38] duration metric: took 6m0.000147881s for node "no-preload-491054" to be "Ready" ...
	I1222 01:49:38.599511 2142162 out.go:203] 
	W1222 01:49:38.602362 2142162 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:49:38.602381 2142162 out.go:285] * 
	W1222 01:49:38.604626 2142162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:49:38.608399 2142162 out.go:203] 
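
	The failing process (pid 2142162) has been polling GET /api/v1/nodes/no-preload-491054 every ~2.5s; after 6m0s the WaitNodeCondition deadline expires and minikube exits with GUEST_START. A client-go sketch of an equivalent readiness wait, assuming a valid kubeconfig at the path shown (a hypothetical program, not minikube's node_ready.go):

	// node_ready_sketch.go: poll a node's Ready condition until a deadline.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same overall shape as the log: poll every 2.5s, give up after 6m.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "no-preload-491054", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("WaitNodeCondition: context deadline exceeded")
				return
			case <-time.After(2500 * time.Millisecond):
			}
		}
	}
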
	
	
	==> CRI-O <==
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361841003Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361848683Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.36185418Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361860416Z" level=info msg="RDT not available in the host system"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361873446Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362571854Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362593843Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362610154Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363331199Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363355166Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363489102Z" level=info msg="Updated default CNI network name to "
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364161575Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364588492Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364649191Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426273648Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426307715Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426348477Z" level=info msg="Create NRI interface"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426482331Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426498232Z" level=info msg="runtime interface created"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426511188Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426517128Z" level=info msg="runtime interface starting up..."
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426522773Z" level=info msg="starting plugins..."
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426536213Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426597914Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:43:36 no-preload-491054 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:39.699295    4041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:39.700078    4041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:39.701775    4041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:39.702086    4041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:39.703508    4041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:49:39 up 1 day,  8:32,  0 user,  load average: 0.52, 0.83, 1.43
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:49:37 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:49:38 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 22 01:49:38 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:49:38 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:49:38 no-preload-491054 kubelet[3924]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:49:38 no-preload-491054 kubelet[3924]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:49:38 no-preload-491054 kubelet[3924]: E1222 01:49:38.139410    3924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:49:38 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:49:38 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:49:38 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 22 01:49:38 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:49:38 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:49:38 no-preload-491054 kubelet[3929]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:49:38 no-preload-491054 kubelet[3929]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:49:38 no-preload-491054 kubelet[3929]: E1222 01:49:38.918243    3929 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:49:38 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:49:38 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:49:39 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 22 01:49:39 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:49:39 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:49:39 no-preload-491054 kubelet[4024]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:49:39 no-preload-491054 kubelet[4024]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:49:39 no-preload-491054 kubelet[4024]: E1222 01:49:39.650479    4024 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:49:39 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:49:39 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 2 (376.013725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.01s)
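The kubelet units above never get past configuration validation: restart counters 481 through 483 all end in "kubelet is configured to not run on a host using cgroup v1", so the API server stays down and the node can never report Ready inside the 6m0s wait. A minimal way to confirm the host's cgroup mode, as a diagnostic sketch run outside the harness (assuming a Linux host with GNU coreutils):

	# Print the filesystem type mounted at /sys/fs/cgroup.
	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy
	# cgroup v1, which the v1.35.0-rc.1 kubelet rejects at startup.
	stat -fc %T /sys/fs/cgroup/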

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (102.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1222 01:47:01.691492 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:47:03.284412 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:47:29.377387 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:47:43.835373 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m41.343879297s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
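Every validation call in the stderr block above fails the same way: nothing answers on the API server port, so kubectl cannot even download the OpenAPI schema before applying the addon manifests. A quick connectivity probe, as a sketch outside the harness (host port 38779 is taken from the docker inspect output below, which maps it to the container's 8443; -k is needed because the test cluster's CA is not in the host trust store):

	# A healthy apiserver answers "ok" on /healthz; "connection refused" here
	# matches the addon-enable failure above.
	curl -k https://127.0.0.1:38779/healthz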
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-704217
helpers_test.go:244: (dbg) docker inspect newest-cni-704217:

-- stdout --
	[
	    {
	        "Id": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	        "Created": "2025-12-22T01:38:39.533406818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2134718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:38:39.609427135Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hosts",
	        "LogPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e-json.log",
	        "Name": "/newest-cni-704217",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-704217:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-704217",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	                "LowerDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-704217",
	                "Source": "/var/lib/docker/volumes/newest-cni-704217/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-704217",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-704217",
	                "name.minikube.sigs.k8s.io": "newest-cni-704217",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f1dc425bcc5666fe6d4a2f24c42949905dae75aa0fe97b26897c73e98810389",
	            "SandboxKey": "/var/run/docker/netns/8f1dc425bcc5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38776"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38777"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38780"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38778"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38779"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-704217": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:1f:25:28:ad:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37c5f2b876d48064d8ed7b8ba7f6e9a5833c911bae1041e13f11dbc7e56a4324",
	                    "EndpointID": "936fe31e2a62e192e6e8b7fb0777030ed5d6eddd56a85d9ab0417f54428bd2bc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-704217",
	                        "0d8468deabde"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217: exit status 6 (342.930721ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:48:37.465655 2148928 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
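Exit status 6 is a kubeconfig mismatch rather than a host failure: the container reports Running, but the profile has no endpoint entry in the test kubeconfig, exactly as the stderr shows. The warning in the stdout block names its own remedy; for this report's profile it would be, as a sketch:

	# Rewrite the kubeconfig entry for this profile to point at the running cluster.
	out/minikube-linux-arm64 update-context -p newest-cni-704217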
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:33 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-235567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p embed-certs-235567 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	│ stop    │ -p no-preload-491054 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ addons  │ enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:43:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:43:30.250445 2142162 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:43:30.250555 2142162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:43:30.250568 2142162 out.go:374] Setting ErrFile to fd 2...
	I1222 01:43:30.250578 2142162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:43:30.250816 2142162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:43:30.251169 2142162 out.go:368] Setting JSON to false
	I1222 01:43:30.252022 2142162 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116760,"bootTime":1766251051,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:43:30.252085 2142162 start.go:143] virtualization:  
	I1222 01:43:30.256993 2142162 out.go:179] * [no-preload-491054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:43:30.260139 2142162 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:43:30.260239 2142162 notify.go:221] Checking for updates...
	I1222 01:43:30.265992 2142162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:43:30.268823 2142162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:30.271754 2142162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:43:30.274654 2142162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:43:30.277521 2142162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:43:30.280920 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:30.281572 2142162 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:43:30.304144 2142162 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:43:30.304273 2142162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:43:30.380279 2142162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:43:30.370426413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:43:30.380380 2142162 docker.go:319] overlay module found
	I1222 01:43:30.383492 2142162 out.go:179] * Using the docker driver based on existing profile
	I1222 01:43:30.386252 2142162 start.go:309] selected driver: docker
	I1222 01:43:30.386270 2142162 start.go:928] validating driver "docker" against &{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:30.386379 2142162 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:43:30.387093 2142162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:43:30.443809 2142162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:43:30.434314894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:43:30.444139 2142162 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:43:30.444167 2142162 cni.go:84] Creating CNI manager for ""
	I1222 01:43:30.444230 2142162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:43:30.444275 2142162 start.go:353] cluster config:
	{Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:30.449118 2142162 out.go:179] * Starting "no-preload-491054" primary control-plane node in "no-preload-491054" cluster
	I1222 01:43:30.451889 2142162 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:43:30.454738 2142162 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:43:30.457645 2142162 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:43:30.457719 2142162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:43:30.457790 2142162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:43:30.458097 2142162 cache.go:107] acquiring lock: {Name:mkce8c58631c5f0d23bf68d20d333345f25cc3e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458179 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:43:30.458197 2142162 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.314µs
	I1222 01:43:30.458210 2142162 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:43:30.458227 2142162 cache.go:107] acquiring lock: {Name:mk4201be15d3fc5460d5a3841ee13ef2c95ea347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458267 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:43:30.458277 2142162 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 52.495µs
	I1222 01:43:30.458284 2142162 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458304 2142162 cache.go:107] acquiring lock: {Name:mkf2e3bf6867cb15e485fef5b9e233e7559c99a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458336 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:43:30.458345 2142162 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 43.535µs
	I1222 01:43:30.458353 2142162 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458369 2142162 cache.go:107] acquiring lock: {Name:mkb8446b3e0ab7883b46b478905fd798f55f01e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458400 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:43:30.458409 2142162 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 41.172µs
	I1222 01:43:30.458449 2142162 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458470 2142162 cache.go:107] acquiring lock: {Name:mk77482801a8c57f7731abc5d9887216f12ba8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458530 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:43:30.458546 2142162 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 77.315µs
	I1222 01:43:30.458553 2142162 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:43:30.458608 2142162 cache.go:107] acquiring lock: {Name:mk692f79fde95c3755a4408e63ab002578541c74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458687 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:43:30.458702 2142162 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 113.81µs
	I1222 01:43:30.458726 2142162 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:43:30.458740 2142162 cache.go:107] acquiring lock: {Name:mkd233ea519aa3a2daed97ba862dcfb34d3be39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458851 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:43:30.458865 2142162 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 126.815µs
	I1222 01:43:30.458872 2142162 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:43:30.458914 2142162 cache.go:107] acquiring lock: {Name:mk309b91ac5e9b8dc5721727680b39f391c5db3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.458983 2142162 cache.go:115] /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:43:30.458997 2142162 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 92.724µs
	I1222 01:43:30.459005 2142162 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:43:30.459012 2142162 cache.go:87] Successfully saved all images to host disk.
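Each cache.go exists/succeeded pair above is the same check repeated per image: map the image reference to its on-disk tar path and skip the download when the file is already present. A sketch of that check, with the root path and image list taken from the log (cachePath is a hypothetical helper, not minikube's):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps "registry.k8s.io/pause:3.10.1" to
// ".../registry.k8s.io/pause_3.10.1", matching the paths in the log.
func cachePath(root, image string) string {
	return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	root := "/home/jenkins/minikube-integration/22179-1874095/.minikube/cache/images/arm64"
	images := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
	}
	for _, img := range images {
		if _, err := os.Stat(cachePath(root, img)); err == nil {
			fmt.Println(img, "=> already cached, skipping download")
		} else {
			fmt.Println(img, "=> missing, would download")
		}
	}
}

In this run every stat succeeds, which is why each image completes in microseconds.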
	I1222 01:43:30.477002 2142162 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:43:30.477024 2142162 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:43:30.477043 2142162 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:43:30.477072 2142162 start.go:360] acquireMachinesLock for no-preload-491054: {Name:mk963c2c890d85a550b13090cd7467d59e3caf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:43:30.477129 2142162 start.go:364] duration metric: took 36.634µs to acquireMachinesLock for "no-preload-491054"
	I1222 01:43:30.477153 2142162 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:43:30.477163 2142162 fix.go:54] fixHost starting: 
	I1222 01:43:30.477492 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:30.493752 2142162 fix.go:112] recreateIfNeeded on no-preload-491054: state=Stopped err=<nil>
	W1222 01:43:30.493785 2142162 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:43:30.496979 2142162 out.go:252] * Restarting existing docker container for "no-preload-491054" ...
	I1222 01:43:30.497063 2142162 cli_runner.go:164] Run: docker start no-preload-491054
	I1222 01:43:30.758339 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:30.783924 2142162 kic.go:430] container "no-preload-491054" state is running.
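The fix.go/kic.go sequence above inspects the container, finds state=Stopped, and restarts the existing container rather than recreating it. The same dance with plain docker CLI calls (a sketch; minikube routes these through its cli_runner wrapper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState wraps `docker container inspect --format {{.State.Status}}`.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "no-preload-491054"
	state, err := containerState(name)
	if err != nil {
		panic(err)
	}
	if state != "running" {
		// Matches the log: state=Stopped, so restart the existing container.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
	state, _ = containerState(name)
	fmt.Println(name, "state:", state)
}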
	I1222 01:43:30.784895 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:30.806120 2142162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/config.json ...
	I1222 01:43:30.806345 2142162 machine.go:94] provisionDockerMachine start ...
	I1222 01:43:30.806411 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:30.836679 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:30.837006 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:30.837015 2142162 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:43:30.839411 2142162 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:43:33.973258 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:43:33.973282 2142162 ubuntu.go:182] provisioning hostname "no-preload-491054"
	I1222 01:43:33.973370 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:33.991448 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:33.991768 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:33.991785 2142162 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-491054 && echo "no-preload-491054" | sudo tee /etc/hostname
	I1222 01:43:34.134668 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-491054
	
	I1222 01:43:34.134773 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.153303 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:34.153657 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:34.153681 2142162 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491054/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:43:34.285755 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:43:34.285786 2142162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:43:34.285816 2142162 ubuntu.go:190] setting up certificates
	I1222 01:43:34.285832 2142162 provision.go:84] configureAuth start
	I1222 01:43:34.285896 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:34.304161 2142162 provision.go:143] copyHostCerts
	I1222 01:43:34.304238 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:43:34.304271 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:43:34.304354 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:43:34.304468 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:43:34.304480 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:43:34.304508 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:43:34.304576 2142162 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:43:34.304585 2142162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:43:34.304610 2142162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:43:34.304663 2142162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.no-preload-491054 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-491054]
	I1222 01:43:34.369088 2142162 provision.go:177] copyRemoteCerts
	I1222 01:43:34.369157 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:43:34.369200 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.387117 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:34.489125 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:43:34.507509 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:43:34.526028 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:43:34.544740 2142162 provision.go:87] duration metric: took 258.882924ms to configureAuth
	I1222 01:43:34.544808 2142162 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:43:34.545018 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:34.545137 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.562445 2142162 main.go:144] libmachine: Using SSH client type: native
	I1222 01:43:34.562759 2142162 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38781 <nil> <nil>}
	I1222 01:43:34.562773 2142162 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:43:34.840774 2142162 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:43:34.840838 2142162 machine.go:97] duration metric: took 4.034478045s to provisionDockerMachine
	I1222 01:43:34.840864 2142162 start.go:293] postStartSetup for "no-preload-491054" (driver="docker")
	I1222 01:43:34.840891 2142162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:43:34.841014 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:43:34.841131 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:34.858548 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:34.957415 2142162 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:43:34.960727 2142162 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:43:34.960754 2142162 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:43:34.960765 2142162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:43:34.960821 2142162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:43:34.960908 2142162 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:43:34.961020 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:43:34.968556 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:43:34.986959 2142162 start.go:296] duration metric: took 146.06498ms for postStartSetup
	I1222 01:43:34.987050 2142162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:43:34.987089 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.005930 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.110222 2142162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:43:35.118762 2142162 fix.go:56] duration metric: took 4.641592572s for fixHost
	I1222 01:43:35.118836 2142162 start.go:83] releasing machines lock for "no-preload-491054", held for 4.641692655s
	I1222 01:43:35.118944 2142162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491054
	I1222 01:43:35.139078 2142162 ssh_runner.go:195] Run: cat /version.json
	I1222 01:43:35.139134 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.139394 2142162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:43:35.139451 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:35.169405 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.178990 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:35.269046 2142162 ssh_runner.go:195] Run: systemctl --version
	I1222 01:43:35.364120 2142162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:43:35.400545 2142162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:43:35.405009 2142162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:43:35.405089 2142162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:43:35.413041 2142162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:43:35.413064 2142162 start.go:496] detecting cgroup driver to use...
	I1222 01:43:35.413113 2142162 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:43:35.413189 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:43:35.428420 2142162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:43:35.441740 2142162 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:43:35.441807 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:43:35.456961 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:43:35.470063 2142162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:43:35.582136 2142162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:43:35.708479 2142162 docker.go:234] disabling docker service ...
	I1222 01:43:35.708551 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:43:35.723755 2142162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:43:35.736977 2142162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:43:35.868719 2142162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:43:35.995326 2142162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
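The docker.go steps above stop, disable, and mask the Docker units so that CRI-O is the only container runtime on the node. The same sequence as bare exec calls (unit names and order copied from the log; in this sketch a failed step is only reported, not fatal):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if err := exec.Command("sudo", s...).Run(); err != nil {
			// Report and continue; the next step may still succeed.
			fmt.Println("warning:", s, err)
		}
	}
}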
	I1222 01:43:36.015703 2142162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:43:36.031200 2142162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:43:36.031291 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.040788 2142162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:43:36.040868 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.050288 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.059682 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.069142 2142162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:43:36.078174 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.088084 2142162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.097092 2142162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:43:36.106783 2142162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:43:36.114759 2142162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:43:36.122164 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:36.245950 2142162 ssh_runner.go:195] Run: sudo systemctl restart crio
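The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before reloading systemd and restarting CRI-O. A Go rendition of just the first edit, the pause_image substitution, with the same regex intent as the logged sed and minimal error handling:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Replace the whole pause_image line, whatever its current value.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}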
	I1222 01:43:36.431807 2142162 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:43:36.431892 2142162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:43:36.435793 2142162 start.go:564] Will wait 60s for crictl version
	I1222 01:43:36.435862 2142162 ssh_runner.go:195] Run: which crictl
	I1222 01:43:36.439341 2142162 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:43:36.467888 2142162 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:43:36.467972 2142162 ssh_runner.go:195] Run: crio --version
	I1222 01:43:36.498281 2142162 ssh_runner.go:195] Run: crio --version
	I1222 01:43:36.532998 2142162 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:43:36.535843 2142162 cli_runner.go:164] Run: docker network inspect no-preload-491054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:43:36.552071 2142162 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:43:36.555936 2142162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
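The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any stale entry, append the fresh one, and copy the result back over /etc/hosts. The same logic in Go (the sudo indirection and the /tmp/h.$$ temp-file step are omitted in this sketch):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale host.minikube.internal entry, as the grep -v does.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}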
	I1222 01:43:36.565145 2142162 kubeadm.go:884] updating cluster {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:43:36.565259 2142162 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:43:36.565301 2142162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:43:36.609695 2142162 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:43:36.609715 2142162 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:43:36.609722 2142162 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:43:36.609826 2142162 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:43:36.609901 2142162 ssh_runner.go:195] Run: crio config
	I1222 01:43:36.685397 2142162 cni.go:84] Creating CNI manager for ""
	I1222 01:43:36.685422 2142162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:43:36.685444 2142162 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:43:36.685471 2142162 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491054 NodeName:no-preload-491054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:43:36.685595 2142162 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
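The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:190. A toy version of that templating step, covering only the first two fields of the InitConfiguration (the template and struct names here are illustrative, not minikube's real ones):

package main

import (
	"os"
	"text/template"
)

// A toy fragment only; the real template covers the full config above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
	}{AdvertiseAddress: "192.168.76.2", APIServerPort: 8443}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

The rendered result is then shipped to the node as /var/tmp/minikube/kubeadm.yaml.new, as the scp line below shows.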
	I1222 01:43:36.685674 2142162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:43:36.693362 2142162 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:43:36.693431 2142162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:43:36.700893 2142162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:43:36.713423 2142162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:43:36.726038 2142162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1222 01:43:36.738412 2142162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:43:36.741844 2142162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:43:36.751009 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:36.872344 2142162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:43:36.895746 2142162 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054 for IP: 192.168.76.2
	I1222 01:43:36.895766 2142162 certs.go:195] generating shared ca certs ...
	I1222 01:43:36.895796 2142162 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:36.895947 2142162 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:43:36.895998 2142162 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:43:36.896010 2142162 certs.go:257] generating profile certs ...
	I1222 01:43:36.896128 2142162 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.key
	I1222 01:43:36.896216 2142162 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key.aef9806a
	I1222 01:43:36.896270 2142162 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key
	I1222 01:43:36.896403 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:43:36.896439 2142162 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:43:36.896451 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:43:36.896492 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:43:36.896520 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:43:36.896557 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:43:36.896612 2142162 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:43:36.901156 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:43:36.921532 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:43:36.938876 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:43:36.955891 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:43:36.973839 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:43:36.990503 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:43:37.012123 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:43:37.035419 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:43:37.054551 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:43:37.071987 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:43:37.089400 2142162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:43:37.106223 2142162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:43:37.118959 2142162 ssh_runner.go:195] Run: openssl version
	I1222 01:43:37.125211 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.132736 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:43:37.140145 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.143651 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.143715 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:43:37.184794 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:43:37.192255 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.199933 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:43:37.207517 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.211242 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.211312 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:43:37.252148 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:43:37.259423 2142162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.266494 2142162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:43:37.274006 2142162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.277730 2142162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.277799 2142162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:43:37.318687 2142162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:43:37.326275 2142162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:43:37.330267 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:43:37.371711 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:43:37.412636 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:43:37.455494 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:43:37.496495 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:43:37.537537 2142162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
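Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509 (the path comes from the log; this is a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path expires within d,
// mirroring `openssl x509 -checkend` semantics.
func expiresSoon(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}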
	I1222 01:43:37.578877 2142162 kubeadm.go:401] StartCluster: {Name:no-preload-491054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-491054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:43:37.578975 2142162 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:43:37.579049 2142162 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:43:37.606764 2142162 cri.go:96] found id: ""
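The cri.go probe above lists kube-system containers through crictl and finds none (found id: ""), confirming nothing is left over from a paused run. The same invocation via os/exec, with flags copied from the logged command:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	// The log shows `found id: ""`, i.e. zero containers.
	fmt.Printf("found %d kube-system containers\n", len(ids))
}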
	I1222 01:43:37.606851 2142162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:43:37.614979 2142162 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:43:37.615050 2142162 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:43:37.615145 2142162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:43:37.622476 2142162 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:43:37.622933 2142162 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-491054" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:37.623103 2142162 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-491054" cluster setting kubeconfig missing "no-preload-491054" context setting]
	I1222 01:43:37.623423 2142162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.624922 2142162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:43:37.632919 2142162 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:43:37.632949 2142162 kubeadm.go:602] duration metric: took 17.880466ms to restartPrimaryControlPlane
	I1222 01:43:37.632959 2142162 kubeadm.go:403] duration metric: took 54.094028ms to StartCluster
	I1222 01:43:37.632974 2142162 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.633031 2142162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:43:37.633700 2142162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:43:37.633896 2142162 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:43:37.634282 2142162 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:43:37.634245 2142162 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:43:37.634339 2142162 addons.go:70] Setting storage-provisioner=true in profile "no-preload-491054"
	I1222 01:43:37.634361 2142162 addons.go:239] Setting addon storage-provisioner=true in "no-preload-491054"
	I1222 01:43:37.634393 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.634429 2142162 addons.go:70] Setting dashboard=true in profile "no-preload-491054"
	I1222 01:43:37.634454 2142162 addons.go:239] Setting addon dashboard=true in "no-preload-491054"
	W1222 01:43:37.634473 2142162 addons.go:248] addon dashboard should already be in state true
	I1222 01:43:37.634514 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.634848 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.635327 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.636958 2142162 addons.go:70] Setting default-storageclass=true in profile "no-preload-491054"
	I1222 01:43:37.636991 2142162 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-491054"
	I1222 01:43:37.637296 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.637634 2142162 out.go:179] * Verifying Kubernetes components...
	I1222 01:43:37.640709 2142162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:43:37.689115 2142162 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:43:37.692012 2142162 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:43:37.693731 2142162 addons.go:239] Setting addon default-storageclass=true in "no-preload-491054"
	I1222 01:43:37.693777 2142162 host.go:66] Checking if "no-preload-491054" exists ...
	I1222 01:43:37.694198 2142162 cli_runner.go:164] Run: docker container inspect no-preload-491054 --format={{.State.Status}}
	I1222 01:43:37.694799 2142162 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:43:37.694913 2142162 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:37.694931 2142162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:43:37.694979 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.697616 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:43:37.697665 2142162 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:43:37.697851 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.735798 2142162 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:37.735820 2142162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:43:37.735884 2142162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491054
	I1222 01:43:37.742160 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.745661 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.771276 2142162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38781 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/no-preload-491054/id_rsa Username:docker}
	I1222 01:43:37.878826 2142162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:43:37.905903 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:43:37.905939 2142162 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:43:37.908326 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:37.924424 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:37.931427 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:43:37.931493 2142162 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:43:37.987971 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:43:37.988038 2142162 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:43:38.013982 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:43:38.014064 2142162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:43:38.030075 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:43:38.030106 2142162 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:43:38.044755 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:43:38.044781 2142162 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:43:38.060102 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:43:38.060127 2142162 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:43:38.074803 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:43:38.074827 2142162 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:43:38.089429 2142162 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:38.089492 2142162 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
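(Annotation: each "installing ... / scp ... --> ..." pair above stages one addon manifest onto the node over the SSH connection just opened — user docker, key auth, forwarded port 38781. The dashboard-dp.yaml line reads "scp memory" because that manifest is rendered in memory rather than read from disk. Below is a rough stand-in for the copy step using the scp CLI; paths are hypothetical, and minikube's ssh_runner actually streams the bytes over its own SSH session, using sudo for the /etc/kubernetes target.)

```go
// Stage one addon manifest onto the node, mirroring the ssh_runner lines.
package main

import (
	"fmt"
	"os/exec"
)

func stageManifest(port, keyPath, localPath, remotePath string) error {
	cmd := exec.Command("scp",
		"-P", port, // forwarded host port from the docker inspect lookup
		"-i", keyPath, // per-machine private key
		"-o", "StrictHostKeyChecking=no",
		localPath,
		"docker@127.0.0.1:"+remotePath)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := stageManifest("38781",
		"/path/to/machines/no-preload-491054/id_rsa", // hypothetical key path
		"dashboard-ns.yaml",
		"/tmp/dashboard-ns.yaml") // staged under /tmp; the real target needs sudo
	fmt.Println(err)
}
```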
	I1222 01:43:38.107550 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:38.596163 2142162 node_ready.go:35] waiting up to 6m0s for node "no-preload-491054" to be "Ready" ...
	W1222 01:43:38.596533 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:38.596585 2142162 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.596646 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.596869 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:38.842397 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:38.893925 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:38.919126 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:38.920556 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.990827 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:38.996043 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.180030 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:39.243487 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.507921 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:43:39.546361 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:39.585095 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:39.637531 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:39.923912 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:39.969528 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:39.984506 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:40.021662 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:40.047521 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:40.093478 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:40.597265 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
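(Annotation: in parallel with the addon retries, node_ready.go polls the node object itself — GET /api/v1/nodes/no-preload-491054 against 192.168.76.2:8443, budgeted at 6m per the "waiting up to 6m0s" line — and logs the same connection-refused error while the apiserver is down. The equivalent check with client-go, as a sketch; kubeconfig path and poll interval are assumptions.)

```go
// Poll a node's Ready condition until it is True, as node_ready.go does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(client *kubernetes.Clientset, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 120; i++ { // ~6m at a 3s interval
		ok, err := nodeReady(client, "no-preload-491054")
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}
```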
	I1222 01:43:40.934859 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:40.995829 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:40.998914 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:43:41.002415 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:41.074990 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:41.086742 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:41.721168 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:41.778438 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:41.812650 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:41.884140 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:42.092665 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:42.179951 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:43.096812 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:43.264030 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:43.326256 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:43.941488 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:44.001292 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:44.307335 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:44.364551 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:45.097718 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:45.303907 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:45.386285 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:46.276008 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:46.335276 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:47.596694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:48.343243 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:48.427142 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:49.596755 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:50.701128 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:43:50.740652 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:50.774874 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:50.809842 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:51.233104 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:51.289972 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:52.096678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:43:54.596681 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:55.496576 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:43:55.556091 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:55.556130 2142162 retry.go:84] will retry after 5.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1222 01:43:57.097745 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:43:57.830249 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:43:57.896591 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:43:59.048455 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:43:59.117670 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:43:59.596625 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:00.689639 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:00.748079 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:01.596784 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:04.096680 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:06.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:07.740464 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:07.801397 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:07.801438 2142162 retry.go:84] will retry after 14.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1222 01:44:08.406997 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:08.464207 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:08.596682 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:08.637968 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:08.697473 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:10.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:13.096695 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:15.097712 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:17.597645 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:20.096640 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:22.096730 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:22.136999 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:22.201886 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:22.201932 2142162 retry.go:84] will retry after 31.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1222 01:44:24.596746 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:27.096668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:28.713976 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:44:28.773383 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:28.773420 2142162 retry.go:84] will retry after 43.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1222 01:44:29.097212 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:29.449928 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:29.512969 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:44:31.596698 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:33.597657 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:36.096782 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:38.096835 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:40.596871 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:42.597417 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:44.597559 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:47.097409 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:49.596695 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:50.536518 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:44:50.603160 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:50.603196 2142162 retry.go:84] will retry after 32.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1222 01:44:51.597438 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:53.597561 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:44:53.885143 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:44:53.945074 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:44:53.945111 2142162 retry.go:84] will retry after 39.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1222 01:44:56.096872 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:44:58.596737 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:00.597427 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:03.096930 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:05.596787 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:08.096924 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:10.097654 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:12.224247 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:45:12.280984 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:12.281100 2142162 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	]
	W1222 01:45:12.596649 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:14.596729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:16.596790 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:18.596938 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:21.096848 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:23.505160 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:45:23.571151 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:23.571268 2142162 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	]
	W1222 01:45:23.597779 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:26.096739 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:28.096845 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:30.096928 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:45:32.596762 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:45:33.183034 2142162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:45:33.241129 2142162 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:45:33.241240 2142162 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
	I1222 01:45:33.244397 2142162 out.go:179] * Enabled addons: 
	I1222 01:45:33.247144 2142162 addons.go:530] duration metric: took 1m55.612898686s for enable addons: enabled=[]
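Every apply above fails for the same underlying reason: nothing is answering on the apiserver port, so kubectl cannot even download the OpenAPI schema it validates against. A quick sanity check from inside the node, as a hedged sketch (the profile name is taken from the surrounding log):

	minikube ssh -p no-preload-491054
	sudo ss -ltnp | grep 8443                 # is anything bound to the apiserver port?
	curl -ks https://localhost:8443/healthz   # prints 'ok' once the apiserver is up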
	W1222 01:45:34.596832 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready "connection refused" retry for "no-preload-491054", logged every ~2.5s from 01:45:37 through 01:46:46 ...]
	W1222 01:46:48.597596 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:46:53.725590 2134286 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001122316s
	I1222 01:46:53.725621 2134286 kubeadm.go:319] 
	I1222 01:46:53.725676 2134286 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:46:53.725720 2134286 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:46:53.725879 2134286 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:46:53.725891 2134286 kubeadm.go:319] 
	I1222 01:46:53.725990 2134286 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:46:53.726023 2134286 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:46:53.726055 2134286 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:46:53.726063 2134286 kubeadm.go:319] 
	I1222 01:46:53.729947 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:46:53.730369 2134286 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:46:53.730481 2134286 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:46:53.730720 2134286 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:46:53.730733 2134286 kubeadm.go:319] 
	I1222 01:46:53.730802 2134286 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
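kubeadm's own suggestion is the natural next step here. On a minikube node the same inspection can be done over SSH; a minimal sketch, assuming the profile name from the logs below:

	minikube ssh -p newest-cni-704217
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 50 --no-pager   # the tail of this journal appears later in this report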
	I1222 01:46:53.730854 2134286 kubeadm.go:403] duration metric: took 8m6.124408914s to StartCluster
	I1222 01:46:53.730901 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:46:53.730968 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:46:53.759901 2134286 cri.go:96] found id: ""
	I1222 01:46:53.759933 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.759942 2134286 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:46:53.759948 2134286 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:46:53.760012 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:46:53.785167 2134286 cri.go:96] found id: ""
	I1222 01:46:53.785192 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.785201 2134286 logs.go:284] No container was found matching "etcd"
	I1222 01:46:53.785208 2134286 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:46:53.785267 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:46:53.810142 2134286 cri.go:96] found id: ""
	I1222 01:46:53.810168 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.810177 2134286 logs.go:284] No container was found matching "coredns"
	I1222 01:46:53.810184 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:46:53.810243 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:46:53.839873 2134286 cri.go:96] found id: ""
	I1222 01:46:53.839895 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.839903 2134286 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:46:53.839910 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:46:53.839968 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:46:53.874440 2134286 cri.go:96] found id: ""
	I1222 01:46:53.874526 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.874548 2134286 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:46:53.874593 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:46:53.874690 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:46:53.926286 2134286 cri.go:96] found id: ""
	I1222 01:46:53.926309 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.926318 2134286 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:46:53.926324 2134286 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:46:53.926384 2134286 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:46:53.961693 2134286 cri.go:96] found id: ""
	I1222 01:46:53.961717 2134286 logs.go:282] 0 containers: []
	W1222 01:46:53.961726 2134286 logs.go:284] No container was found matching "kindnet"
	I1222 01:46:53.961735 2134286 logs.go:123] Gathering logs for kubelet ...
	I1222 01:46:53.961747 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:46:54.027665 2134286 logs.go:123] Gathering logs for dmesg ...
	I1222 01:46:54.027701 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:46:54.044975 2134286 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:46:54.045007 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:46:54.111999 2134286 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:46:54.103598    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.104026    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.105730    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.106387    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.107891    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:46:54.103598    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.104026    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.105730    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.106387    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:46:54.107891    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:46:54.112026 2134286 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:46:54.112038 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:46:54.144880 2134286 logs.go:123] Gathering logs for container status ...
	I1222 01:46:54.144914 2134286 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:46:54.176206 2134286 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001122316s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:46:54.176269 2134286 out.go:285] * 
	W1222 01:46:54.176328 2134286 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout and stderr identical to the "Error starting cluster" dump above ...]
	
	W1222 01:46:54.176346 2134286 out.go:285] * 
	W1222 01:46:54.178676 2134286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:46:54.184460 2134286 out.go:203] 
	W1222 01:46:54.187291 2134286 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout and stderr identical to the "Error starting cluster" dump above ...]
	
	W1222 01:46:54.187335 2134286 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:46:54.187358 2134286 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:46:54.190511 2134286 out.go:203] 
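The kubelet journal further down shows why the health check at 127.0.0.1:10248 never passed: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host. A hedged sketch of the two obvious checks, combining a host-side cgroup probe with the retry minikube itself suggests above:

	stat -fc %T /sys/fs/cgroup    # 'cgroup2fs' = cgroup v2, 'tmpfs' = cgroup v1
	minikube start -p newest-cni-704217 --extra-config=kubelet.cgroup-driver=systemd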
	W1222 01:46:51.096903 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready "connection refused" retry for "no-preload-491054", logged every ~2.5s from 01:46:53 through 01:48:30 ...]
	W1222 01:48:33.096831 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	
	
	==> CRI-O <==
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186476851Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186517489Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186556651Z" level=info msg="Create NRI interface"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186675893Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186684803Z" level=info msg="runtime interface created"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.18669798Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186704995Z" level=info msg="runtime interface starting up..."
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186711428Z" level=info msg="starting plugins..."
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186735969Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:38:46 newest-cni-704217 crio[840]: time="2025-12-22T01:38:46.186808861Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:38:46 newest-cni-704217 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.941801178Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=363e5130-37e7-4794-9a64-87db5602a3b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.942603353Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=01966ee5-85dd-4af2-a16a-e2904ef39b35 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.943190781Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=d1cc02a5-997f-47e2-bf1e-f6898b035ae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.943676001Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=07eb6416-cb40-428d-b4e2-6e11da6ff396 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.944098356Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=27d83e05-afc4-4f4d-9c18-9d6d73e88161 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.94456468Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4c23eaab-b844-4e55-8d5a-8b5f6c5eecc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:38:47 newest-cni-704217 crio[840]: time="2025-12-22T01:38:47.945065752Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c7c28c5c-4525-4455-b143-1cb99fff7891 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.388594528Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=d22a8ef1-a27f-4748-8413-486dc5c014b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.389478407Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=c9b2b1e7-6b2f-4965-9da6-e9e955d42f2b name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.390057606Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=dbb891b5-77d5-46d1-a8e0-9d7a1d8e94a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.390520107Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=e02a5d41-e3c3-4923-910e-677d823dc19d name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.393055956Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9e548051-e2c2-4223-9635-9e7890b00ab4 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.393806801Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cbc0faec-b0b9-4ab6-9d68-74de207b6122 name=/runtime.v1.ImageService/ImageStatus
	Dec 22 01:42:52 newest-cni-704217 crio[840]: time="2025-12-22T01:42:52.394351244Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8d4054a3-5da7-48ed-abef-cdceb6553ab0 name=/runtime.v1.ImageService/ImageStatus
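The CRI-O log shows only image-status probes and no container creation, which matches the empty container table below. The same view can be reproduced inside the node with crictl (a sketch; both are standard crictl subcommands):

	sudo crictl ps -a     # all containers in any state; empty on this node
	sudo crictl images    # images CRI-O has already pulled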
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:48:38.124805    6143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:48:38.125734    6143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:48:38.130731    6143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:48:38.132117    6143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:48:38.132818    6143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:48:38 up 1 day,  8:31,  0 user,  load average: 0.78, 0.91, 1.50
	Linux newest-cni-704217 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:48:35 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:48:36 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 783.
	Dec 22 01:48:36 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:48:36 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:48:36 newest-cni-704217 kubelet[6035]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:48:36 newest-cni-704217 kubelet[6035]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:48:36 newest-cni-704217 kubelet[6035]: E1222 01:48:36.664203    6035 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:48:36 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:48:36 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:48:37 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 784.
	Dec 22 01:48:37 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:48:37 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:48:37 newest-cni-704217 kubelet[6056]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:48:37 newest-cni-704217 kubelet[6056]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:48:37 newest-cni-704217 kubelet[6056]: E1222 01:48:37.460600    6056 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:48:37 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:48:37 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:48:38 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 785.
	Dec 22 01:48:38 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:48:38 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:48:38 newest-cni-704217 kubelet[6147]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:48:38 newest-cni-704217 kubelet[6147]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:48:38 newest-cni-704217 kubelet[6147]: E1222 01:48:38.185458    6147 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:48:38 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:48:38 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
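
The repeating kubelet failure above is a startup validation error rather than a crash: kubelet v1.35.0-rc.1 refuses to run on a host that is still on the legacy cgroup v1 hierarchy, so systemd restarts it roughly once a second and the restart counter climbs past 780. A minimal sketch for checking which cgroup mode a host (or the kicbase guest) is actually on; the stat probe is a debugging suggestion, not part of the test run:

	# The filesystem type at the cgroup root identifies the mode:
	#   cgroup2fs -> unified hierarchy (cgroup v2), accepted by kubelet v1.35
	#   tmpfs     -> legacy hierarchy (cgroup v1), rejected with the error logged above
	stat -fc %T /sys/fs/cgroup/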
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 6 (306.58575ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 01:48:38.614383 2149157 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-704217" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (102.85s)
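
The exit status 6 above is a kubeconfig problem rather than an apiserver check failing: the profile's endpoint was never written back to the kubeconfig, so `status` cannot even locate the cluster, and the warning in stdout already names the repair. A hedged sketch of that step, assuming the same binary and profile as the run above:

	# Rewrite this profile's server entry in the kubeconfig:
	out/minikube-linux-arm64 update-context -p newest-cni-704217
	# Confirm the context is present afterwards:
	kubectl config get-contexts newest-cni-704217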

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (374.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m9.89052805s)

                                                
                                                
-- stdout --
	* [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
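
The stdout above stops right after "* Enabled addons:"; the command failed with exit status 105 after just over six minutes of waiting on the components named in --wait (apiserver, system_pods, default_sa), which is consistent with the kubelet crash loop captured earlier. A sketch of the quickest confirmation step, under the assumption that the newest-cni-704217 container is still running:

	# Pull the last kubelet journal entries out of the guest:
	out/minikube-linux-arm64 ssh -p newest-cni-704217 "sudo journalctl -u kubelet --no-pager -n 50"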
** stderr ** 
	I1222 01:48:40.242767 2149446 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:48:40.242975 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243012 2149446 out.go:374] Setting ErrFile to fd 2...
	I1222 01:48:40.243032 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243304 2149446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:48:40.243710 2149446 out.go:368] Setting JSON to false
	I1222 01:48:40.244718 2149446 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117070,"bootTime":1766251051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:48:40.244822 2149446 start.go:143] virtualization:  
	I1222 01:48:40.247655 2149446 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:48:40.251418 2149446 notify.go:221] Checking for updates...
	I1222 01:48:40.254130 2149446 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:48:40.257153 2149446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:48:40.259997 2149446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:40.262948 2149446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:48:40.265828 2149446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:48:40.268754 2149446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:48:40.272145 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:40.272826 2149446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:48:40.305841 2149446 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:48:40.306001 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.377826 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.367957332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.377936 2149446 docker.go:319] overlay module found
	I1222 01:48:40.381097 2149446 out.go:179] * Using the docker driver based on existing profile
	I1222 01:48:40.383847 2149446 start.go:309] selected driver: docker
	I1222 01:48:40.383862 2149446 start.go:928] validating driver "docker" against &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.383955 2149446 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:48:40.384655 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.450570 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.441705374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.450907 2149446 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:48:40.450934 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:40.450985 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:40.451021 2149446 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.454057 2149446 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:48:40.456938 2149446 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:48:40.459827 2149446 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:48:40.462746 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:40.462788 2149446 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:48:40.462800 2149446 cache.go:65] Caching tarball of preloaded images
	I1222 01:48:40.462841 2149446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:48:40.462889 2149446 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:48:40.462900 2149446 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:48:40.463007 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.482525 2149446 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:48:40.482547 2149446 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:48:40.482567 2149446 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:48:40.482597 2149446 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:48:40.482656 2149446 start.go:364] duration metric: took 36.504µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:48:40.482682 2149446 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:48:40.482690 2149446 fix.go:54] fixHost starting: 
	I1222 01:48:40.482953 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.499420 2149446 fix.go:112] recreateIfNeeded on newest-cni-704217: state=Stopped err=<nil>
	W1222 01:48:40.499450 2149446 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:48:40.502622 2149446 out.go:252] * Restarting existing docker container for "newest-cni-704217" ...
	I1222 01:48:40.502704 2149446 cli_runner.go:164] Run: docker start newest-cni-704217
	I1222 01:48:40.772557 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.795810 2149446 kic.go:430] container "newest-cni-704217" state is running.
	I1222 01:48:40.796193 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:40.819335 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.819554 2149446 machine.go:94] provisionDockerMachine start ...
	I1222 01:48:40.819841 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:40.841916 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:40.842255 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:40.842265 2149446 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:48:40.844138 2149446 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:48:43.976885 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:43.976911 2149446 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:48:43.977014 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:43.994105 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:43.994420 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:43.994436 2149446 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:48:44.140605 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:44.140693 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.161016 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.161373 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.161391 2149446 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:48:44.293770 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:48:44.293859 2149446 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:48:44.293905 2149446 ubuntu.go:190] setting up certificates
	I1222 01:48:44.293936 2149446 provision.go:84] configureAuth start
	I1222 01:48:44.294013 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:44.316482 2149446 provision.go:143] copyHostCerts
	I1222 01:48:44.316547 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:48:44.316556 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:48:44.316633 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:48:44.316728 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:48:44.316733 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:48:44.316759 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:48:44.316806 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:48:44.316811 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:48:44.316833 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:48:44.316876 2149446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:48:44.455342 2149446 provision.go:177] copyRemoteCerts
	I1222 01:48:44.455417 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:48:44.455467 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.474519 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:44.573438 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:48:44.590713 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:48:44.609690 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:48:44.627314 2149446 provision.go:87] duration metric: took 333.343615ms to configureAuth
	I1222 01:48:44.627344 2149446 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:48:44.627539 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:44.627653 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.649833 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.650146 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.650167 2149446 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:48:44.926182 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:48:44.926204 2149446 machine.go:97] duration metric: took 4.106640691s to provisionDockerMachine
	I1222 01:48:44.926234 2149446 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:48:44.926251 2149446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:48:44.926311 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:48:44.926356 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.946595 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.045827 2149446 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:48:45.062421 2149446 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:48:45.062452 2149446 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:48:45.062466 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:48:45.062535 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:48:45.062625 2149446 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:48:45.062747 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:48:45.075486 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:45.105717 2149446 start.go:296] duration metric: took 179.465684ms for postStartSetup
	I1222 01:48:45.105860 2149446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:48:45.105935 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.131206 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.246727 2149446 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:48:45.256769 2149446 fix.go:56] duration metric: took 4.774070481s for fixHost
	I1222 01:48:45.256799 2149446 start.go:83] releasing machines lock for "newest-cni-704217", held for 4.774128563s
	I1222 01:48:45.256887 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:45.279239 2149446 ssh_runner.go:195] Run: cat /version.json
	I1222 01:48:45.279292 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.279295 2149446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:48:45.279352 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.302892 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.315026 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.495990 2149446 ssh_runner.go:195] Run: systemctl --version
	I1222 01:48:45.502719 2149446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:48:45.539801 2149446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:48:45.545121 2149446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:48:45.545189 2149446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:48:45.553556 2149446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:48:45.553582 2149446 start.go:496] detecting cgroup driver to use...
	I1222 01:48:45.553612 2149446 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:48:45.553661 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:48:45.568706 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:48:45.581738 2149446 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:48:45.581818 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:48:45.597902 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:48:45.614693 2149446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:48:45.735512 2149446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:48:45.846918 2149446 docker.go:234] disabling docker service ...
	I1222 01:48:45.847000 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:48:45.861997 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:48:45.874967 2149446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:48:45.982721 2149446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:48:46.093378 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:48:46.106471 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:48:46.120288 2149446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:48:46.120369 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.130327 2149446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:48:46.130417 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.139300 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.148385 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.157463 2149446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:48:46.165914 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.175804 2149446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.185241 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.194290 2149446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:48:46.201953 2149446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:48:46.209542 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.327150 2149446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:48:46.504853 2149446 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:48:46.504948 2149446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:48:46.509123 2149446 start.go:564] Will wait 60s for crictl version
	I1222 01:48:46.509240 2149446 ssh_runner.go:195] Run: which crictl
	I1222 01:48:46.513227 2149446 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:48:46.539190 2149446 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:48:46.539329 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.570401 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.603161 2149446 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:48:46.606112 2149446 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:48:46.622587 2149446 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:48:46.626479 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.639159 2149446 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:48:46.642073 2149446 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:48:46.642257 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:46.642336 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.682401 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.682426 2149446 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:48:46.682483 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.712357 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.712379 2149446 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:48:46.712388 2149446 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:48:46.712483 2149446 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:48:46.712573 2149446 ssh_runner.go:195] Run: crio config
	I1222 01:48:46.797974 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:46.798006 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:46.798025 2149446 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:48:46.798055 2149446 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:48:46.798231 2149446 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:48:46.798309 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:48:46.806374 2149446 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:48:46.806461 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:48:46.814198 2149446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:48:46.828274 2149446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:48:46.841483 2149446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:48:46.854374 2149446 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:48:46.857976 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.868061 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.994841 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:47.017882 2149446 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:48:47.017947 2149446 certs.go:195] generating shared ca certs ...
	I1222 01:48:47.017976 2149446 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.018155 2149446 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:48:47.018235 2149446 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:48:47.018271 2149446 certs.go:257] generating profile certs ...
	I1222 01:48:47.018410 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:48:47.018519 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:48:47.018584 2149446 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:48:47.018730 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:48:47.018786 2149446 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:48:47.018810 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:48:47.018867 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:48:47.018918 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:48:47.018973 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:48:47.019053 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:47.019666 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:48:47.038515 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:48:47.057188 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:48:47.080288 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:48:47.101214 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:48:47.120685 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:48:47.140350 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:48:47.160406 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:48:47.178292 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:48:47.195480 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:48:47.213647 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:48:47.231427 2149446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:48:47.244572 2149446 ssh_runner.go:195] Run: openssl version
	I1222 01:48:47.250948 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.258307 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:48:47.265613 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269481 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.314712 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:48:47.322028 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.329298 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:48:47.336716 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340415 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340535 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.381670 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:48:47.389031 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.396101 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:48:47.403471 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407346 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.449487 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:48:47.456987 2149446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:48:47.460641 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:48:47.501381 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:48:47.542911 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:48:47.587141 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:48:47.628220 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:48:47.670351 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:48:47.715724 2149446 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:47.715830 2149446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:48:47.715926 2149446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:48:47.743013 2149446 cri.go:96] found id: ""
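
	An empty ID list here means no kube-system containers are left running on the node before the restart. The query is plain crictl; a Go stand-in for the listing (assuming exactly the flags shown in the log line above):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listKubeSystemIDs shells out the same crictl query as the log line;
	    // an empty slice corresponds to the `found id: ""` result above.
	    func listKubeSystemIDs() ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
	            "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        ids, err := listKubeSystemIDs()
	        fmt.Println(ids, err)
	    }
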
	I1222 01:48:47.743133 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:48:47.750858 2149446 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:48:47.750922 2149446 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:48:47.750979 2149446 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:48:47.758109 2149446 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:48:47.758748 2149446 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.759013 2149446 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-704217" cluster setting kubeconfig missing "newest-cni-704217" context setting]
	I1222 01:48:47.759473 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
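
	The repair rewrites the kubeconfig under a file lock, adding the missing cluster and context stanzas for the profile. A hedged sketch using client-go's clientcmd package (minikube's own kubeconfig.go differs in detail; profile name, path, and server URL are taken from the log):

	    package main

	    import (
	        "log"

	        "k8s.io/client-go/tools/clientcmd"
	        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	    )

	    // repairKubeconfig adds missing cluster/context entries for a profile,
	    // the same repair the kubeconfig.go lines report before the WriteFile.
	    func repairKubeconfig(path, name, server string) error {
	        cfg, err := clientcmd.LoadFromFile(path)
	        if err != nil {
	            return err
	        }
	        if _, ok := cfg.Clusters[name]; !ok {
	            cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
	        }
	        if _, ok := cfg.Contexts[name]; !ok {
	            cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	        }
	        return clientcmd.WriteToFile(*cfg, path)
	    }

	    func main() {
	        err := repairKubeconfig("/home/jenkins/minikube-integration/22179-1874095/kubeconfig",
	            "newest-cni-704217", "https://192.168.85.2:8443")
	        if err != nil {
	            log.Fatal(err)
	        }
	    }
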
	I1222 01:48:47.760858 2149446 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:48:47.768601 2149446 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:48:47.768668 2149446 kubeadm.go:602] duration metric: took 17.733602ms to restartPrimaryControlPlane
	I1222 01:48:47.768685 2149446 kubeadm.go:403] duration metric: took 52.971018ms to StartCluster
	I1222 01:48:47.768701 2149446 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.768757 2149446 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.769685 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.769905 2149446 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:48:47.770207 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:47.770291 2149446 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:48:47.770386 2149446 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-704217"
	I1222 01:48:47.770417 2149446 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-704217"
	I1222 01:48:47.770506 2149446 addons.go:70] Setting dashboard=true in profile "newest-cni-704217"
	I1222 01:48:47.770541 2149446 addons.go:70] Setting default-storageclass=true in profile "newest-cni-704217"
	I1222 01:48:47.770547 2149446 addons.go:239] Setting addon dashboard=true in "newest-cni-704217"
	W1222 01:48:47.770588 2149446 addons.go:248] addon dashboard should already be in state true
	I1222 01:48:47.770610 2149446 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-704217"
	I1222 01:48:47.770621 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.770999 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.771324 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.770518 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.772038 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.775384 2149446 out.go:179] * Verifying Kubernetes components...
	I1222 01:48:47.778350 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:47.802410 2149446 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:48:47.805256 2149446 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:48:47.807963 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:48:47.807985 2149446 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:48:47.808059 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.828399 2149446 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:48:47.831382 2149446 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:47.831415 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:48:47.831508 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.834144 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.844610 2149446 addons.go:239] Setting addon default-storageclass=true in "newest-cni-704217"
	I1222 01:48:47.844670 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.845187 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.877451 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.891162 2149446 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:47.891205 2149446 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:48:47.891280 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.926288 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
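
	Each sshutil line above corresponds to a key-authenticated SSH client against the container's forwarded port 22 (38786 on the host here). A sketch with golang.org/x/crypto/ssh (assumption: host-key verification is skipped, as is usual for a throwaway test node):

	    package main

	    import (
	        "log"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    // dialNode opens the key-authenticated client the sshutil lines record:
	    // user "docker" on the docker-proxied SSH port of the node container.
	    func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return nil, err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return nil, err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test node, no known_hosts
	        }
	        return ssh.Dial("tcp", addr, cfg)
	    }

	    func main() {
	        client, err := dialNode("127.0.0.1:38786", "docker",
	            "/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa")
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer client.Close()
	    }
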
	I1222 01:48:48.042819 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:48.112488 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:48:48.112513 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:48:48.128623 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:48:48.128699 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:48:48.148283 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:48:48.148321 2149446 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:48:48.156766 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:48.161900 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:48.167407 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:48:48.167429 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:48:48.203144 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:48:48.203175 2149446 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:48:48.254999 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:48:48.255073 2149446 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:48:48.269219 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:48:48.269248 2149446 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:48:48.282496 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:48:48.282561 2149446 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:48:48.295350 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.295416 2149446 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
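
	The "scp memory --> …" entries stream manifests rendered in memory straight to the node; no local file is written first. A simplified stand-in that pipes the bytes through sudo tee over an SSH session (minikube's transfer speaks an scp-style protocol; this only illustrates the shape):

	    package nodecopy

	    import (
	        "bytes"
	        "fmt"

	        "golang.org/x/crypto/ssh"
	    )

	    // pushManifest writes an in-memory manifest to a privileged remote path
	    // by feeding it to `sudo tee` on the node, mirroring the "scp memory"
	    // lines above in effect if not in mechanism.
	    func pushManifest(client *ssh.Client, data []byte, remotePath string) error {
	        session, err := client.NewSession()
	        if err != nil {
	            return err
	        }
	        defer session.Close()
	        session.Stdin = bytes.NewReader(data)
	        return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	    }
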
	I1222 01:48:48.308102 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.846401 2149446 api_server.go:52] waiting for apiserver process to appear ...
	W1222 01:48:48.846466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:48.846520 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:48.846522 2149446 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
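
	retry.go re-issues the failed apply after a short delay; the later attempts in this log add --force and keep failing until the apiserver is reachable again. A sketch of that loop (fixed 300ms delay for illustration; the real backoff policy lives in retry.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs a kubectl apply with a delay between attempts,
	    // matching the "will retry after 300ms" behaviour logged above. sudo
	    // accepts the leading VAR=value argument as an environment assignment.
	    func applyWithRetry(kubectl, manifest string, attempts int, delay time.Duration) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
	                kubectl, "apply", "--force", "-f", manifest)
	            if err = cmd.Run(); err == nil {
	                return nil
	            }
	            time.Sleep(delay)
	        }
	        return fmt.Errorf("apply of %s still failing after %d attempts: %w", manifest, attempts, err)
	    }

	    func main() {
	        err := applyWithRetry("/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
	            "/etc/kubernetes/addons/storageclass.yaml", 5, 300*time.Millisecond)
	        fmt.Println(err)
	    }
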
	W1222 01:48:48.846636 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846839 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.013510 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:49.073085 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.109119 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:49.170278 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.173753 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.230752 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.347104 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:49.363263 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:49.431947 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.591271 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:49.624743 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.676288 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.711818 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.847130 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.002166 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:50.073384 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.206474 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:50.265056 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.347283 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.486129 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:50.543791 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.847346 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.143881 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:51.205787 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.283923 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:51.341457 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.542158 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:51.609543 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.847044 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.045399 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:52.115058 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.347521 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.607702 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:52.683195 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.846571 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.305882 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:53.347187 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:53.376466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.687056 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:53.754007 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.847202 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.994728 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:54.057241 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:54.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:54.846622 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:55.347576 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
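
	The pgrep probes above repeat on a roughly 500ms cadence until the kube-apiserver process appears, bounded by the 6m0s node wait set earlier. A sketch of the poll (assuming the same pgrep pattern and interval as the log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForAPIServer polls for the kube-apiserver process until it shows
	    // up or the deadline passes, matching the repeated pgrep lines above.
	    func waitForAPIServer(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	    }

	    func main() {
	        fmt.Println(waitForAPIServer(6 * time.Minute))
	    }
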
	I1222 01:48:55.800828 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:55.847504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:55.860654 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.347301 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:56.472869 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:56.531116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.846537 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.346757 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.706997 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:57.770163 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:57.847151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.347452 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.521177 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:58.583041 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:58.847501 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.347147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.847527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.918746 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:59.977904 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.192217 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:00.333722 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.346752 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:00.847514 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.346636 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.846688 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.347266 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.846533 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.346590 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.346607 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.761472 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:49:04.773029 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:49:04.847157 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:04.857098 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.857138 2149446 retry.go:84] will retry after 7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:04.857185 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.879231 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:04.950615 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:05.347388 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:05.847467 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.346572 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.846691 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.347421 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.347439 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.847592 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.847473 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:10.346839 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
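	The half-second cadence of these pgrep lines is minikube polling for the kube-apiserver process. A self-contained sketch of such a poll, using the exact pgrep -xnf invocation from the log (the ticker wrapper is an assumption for illustration, not ssh_runner's implementation):

```go
// Poll every 500ms until a process whose full command line matches the
// pattern exists; pgrep -xnf = exact match (-x) on the full command line
// (-f), newest matching process (-n), as logged above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			// pgrep exits 0 when at least one process matches.
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println("kube-apiserver never appeared:", err)
	}
}
```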
	I1222 01:49:10.642468 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:10.708302 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.708346 2149446 retry.go:84] will retry after 9.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.847493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.346650 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846502 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:11.932894 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:12.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:12.847370 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.346723 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.846605 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.347318 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.846610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:15.276803 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:15.337116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.337152 2149446 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.347320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:15.847453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.846547 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.347494 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.847450 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.347436 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.846663 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.347534 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.846595 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:20.347375 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:20.550393 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:20.629907 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.629948 2149446 retry.go:84] will retry after 13.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.847352 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.347359 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.847541 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.346619 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.846606 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.347285 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.847257 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.346695 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.847195 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:25.347322 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:25.847511 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.346649 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.847539 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.261998 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:27.328263 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.328301 2149446 retry.go:84] will retry after 13.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.347392 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.847138 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.346635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.547391 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:28.610028 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:28.847445 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.346661 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.847366 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:30.346864 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:30.846648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.846643 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.846656 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.346602 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.846639 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.494891 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:34.556245 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.556282 2149446 retry.go:84] will retry after 24.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
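	The retry delays logged so far (7s, 9.8s, 11.9s, 13.9s, 13.8s, 24.5s) grow over time with some jitter. A hedged sketch of that general pattern only; the actual schedule is computed in minikube's retry.go and may differ:

```go
// Illustrative retry-with-growing-jittered-delay loop; not retry.go's logic.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay with the attempt number and add up to ~30% jitter,
		// loosely matching the spacing observed in the log.
		d := base * time.Duration(i+1)
		d += time.Duration(rand.Int63n(int64(d) / 3))
		fmt.Printf("will retry after %v: %v\n", d.Round(100*time.Millisecond), err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithBackoff(3, 7*time.Second, func() error {
		return fmt.Errorf("apply failed: connection refused")
	})
}
```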
	I1222 01:49:34.846617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:35.347129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:35.846681 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.346565 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.846669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.347286 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.846895 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.346662 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.847012 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.347115 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.846591 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.347475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.735554 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:40.795474 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:40.795515 2149446 retry.go:84] will retry after 21.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:40.846792 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.172727 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:41.231783 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:41.347078 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.846609 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.346631 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.847610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.346556 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.847443 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.347395 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.847490 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.346921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.846697 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.346758 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.847524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.346683 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.846830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:47.846917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:47.875900 2149446 cri.go:96] found id: ""
	I1222 01:49:47.875920 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.875929 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:47.875935 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:47.875999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:47.911877 2149446 cri.go:96] found id: ""
	I1222 01:49:47.911899 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.911908 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:47.911914 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:47.911982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:47.941469 2149446 cri.go:96] found id: ""
	I1222 01:49:47.941492 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.941500 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:47.941507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:47.941566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:47.967048 2149446 cri.go:96] found id: ""
	I1222 01:49:47.967088 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.967097 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:47.967104 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:47.967180 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:47.995110 2149446 cri.go:96] found id: ""
	I1222 01:49:47.995135 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.995144 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:47.995150 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:47.995212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:48.023445 2149446 cri.go:96] found id: ""
	I1222 01:49:48.023513 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.023525 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:48.023533 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:48.023607 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:48.051956 2149446 cri.go:96] found id: ""
	I1222 01:49:48.052023 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.052044 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:48.052063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:48.052151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:48.079789 2149446 cri.go:96] found id: ""
	I1222 01:49:48.079817 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.079827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
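	Having found no kube-apiserver process, minikube takes a census of CRI containers, querying crictl once per expected control-plane component; every query above returns an empty ID list. A sketch of that loop (the component names and crictl flags are verbatim from the log; the Go wrapper is illustrative):

```go
// For each expected component, ask crictl for matching container IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids)) // 0 for every name in this run
	}
}
```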
	I1222 01:49:48.079836 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:48.079847 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:48.148881 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:48.148917 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:48.166528 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:48.166562 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:48.230410 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:48.221856    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.222556    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224311    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224847    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.226455    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:48.230487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:48.230509 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:48.259694 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:48.259729 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
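Each cycle above is minikube polling for the control plane: it looks for a kube-apiserver process, lists CRI containers for every expected component (all empty), then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before trying again. The same checks can be reproduced by hand on the node; a minimal sketch, using only commands that already appear verbatim in this log:

	# Is any kube-apiserver process running? (fails here, since none is)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Any control-plane containers, running or exited? (each prints nothing here)
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  sudo crictl --timeout=10s ps -a --quiet --name="$c"
	done
	# Recent kubelet and CRI-O logs, which should explain why nothing started
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400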
	I1222 01:49:50.788374 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:50.799012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:50.799080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:50.824427 2149446 cri.go:96] found id: ""
	I1222 01:49:50.824451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.824460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:50.824467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:50.824526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:50.857602 2149446 cri.go:96] found id: ""
	I1222 01:49:50.857626 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.857636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:50.857642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:50.857700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:50.886784 2149446 cri.go:96] found id: ""
	I1222 01:49:50.886809 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.886817 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:50.886824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:50.886881 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:50.913209 2149446 cri.go:96] found id: ""
	I1222 01:49:50.913239 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.913249 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:50.913256 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:50.913362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:50.939349 2149446 cri.go:96] found id: ""
	I1222 01:49:50.939372 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.939381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:50.939387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:50.939458 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:50.968002 2149446 cri.go:96] found id: ""
	I1222 01:49:50.968068 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.968080 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:50.968092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:50.968182 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:50.992529 2149446 cri.go:96] found id: ""
	I1222 01:49:50.992596 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.992618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:50.992638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:50.992738 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:51.020072 2149446 cri.go:96] found id: ""
	I1222 01:49:51.020143 2149446 logs.go:282] 0 containers: []
	W1222 01:49:51.020165 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:51.020185 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:51.020224 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:51.085768 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:51.085801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:51.102878 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:51.102907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:51.169244 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:51.160719    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.161459    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163246    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163762    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.165256    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:51.169268 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:51.169280 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:51.199236 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:51.199275 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:53.726478 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:53.736764 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:53.736833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:53.761645 2149446 cri.go:96] found id: ""
	I1222 01:49:53.761673 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.761681 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:53.761693 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:53.761752 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:53.790798 2149446 cri.go:96] found id: ""
	I1222 01:49:53.790875 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.790892 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:53.790899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:53.790960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:53.814874 2149446 cri.go:96] found id: ""
	I1222 01:49:53.814941 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.814956 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:53.814963 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:53.815022 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:53.839840 2149446 cri.go:96] found id: ""
	I1222 01:49:53.839906 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.839930 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:53.839950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:53.840033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:53.875780 2149446 cri.go:96] found id: ""
	I1222 01:49:53.875850 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.875870 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:53.875890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:53.875989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:53.908372 2149446 cri.go:96] found id: ""
	I1222 01:49:53.908451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.908473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:53.908519 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:53.908603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:53.937905 2149446 cri.go:96] found id: ""
	I1222 01:49:53.937975 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.937989 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:53.937997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:53.938056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:53.964017 2149446 cri.go:96] found id: ""
	I1222 01:49:53.964043 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.964052 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:53.964061 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:53.964072 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:53.980713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:53.980743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:54.051694 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:54.042644    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.043169    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.044865    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.045545    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.047279    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:54.051718 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:54.051732 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:54.081680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:54.081715 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:54.113831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:54.113858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.682134 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:56.692686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:56.692755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:56.718335 2149446 cri.go:96] found id: ""
	I1222 01:49:56.718360 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.718369 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:56.718375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:56.718433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:56.750169 2149446 cri.go:96] found id: ""
	I1222 01:49:56.750199 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.750207 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:56.750214 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:56.750289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:56.775009 2149446 cri.go:96] found id: ""
	I1222 01:49:56.775033 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.775043 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:56.775049 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:56.775118 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:56.800202 2149446 cri.go:96] found id: ""
	I1222 01:49:56.800226 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.800245 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:56.800253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:56.800317 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:56.825535 2149446 cri.go:96] found id: ""
	I1222 01:49:56.825559 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.825567 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:56.825574 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:56.825633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:56.860355 2149446 cri.go:96] found id: ""
	I1222 01:49:56.860379 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.860388 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:56.860395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:56.860455 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:56.900413 2149446 cri.go:96] found id: ""
	I1222 01:49:56.900481 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.900502 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:56.900520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:56.900608 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:56.930278 2149446 cri.go:96] found id: ""
	I1222 01:49:56.930343 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.930365 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:56.930381 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:56.930394 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.997740 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:56.997776 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:57.016721 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:57.016750 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:57.084545 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:57.076568    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.077143    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.078617    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.079164    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.080737    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:57.084566 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:57.084580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:57.112759 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:57.112793 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:58.183323 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:58.246165 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:58.246322 2149446 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
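The storageclass apply is not failing on the manifest itself: kubectl needs the apiserver to download the OpenAPI schema for validation, and nothing is listening on 8443. The --validate=false flag suggested in the error would only skip that download; the apply itself would still need a reachable apiserver. A quick probe of the endpoint (assuming curl is available on the node, which this log does not show):

	# Expect "connection refused" while the control plane is down
	curl -sk https://localhost:8443/healthz || echo 'apiserver not reachable'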
	I1222 01:49:59.087252 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:59.163517 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:59.163599 2149446 retry.go:84] will retry after 41.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1222 01:49:59.646777 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:59.657009 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:59.657077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:59.682535 2149446 cri.go:96] found id: ""
	I1222 01:49:59.682562 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.682580 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:59.682587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:59.682685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:59.712155 2149446 cri.go:96] found id: ""
	I1222 01:49:59.712179 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.712194 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:59.712201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:59.712313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:59.736832 2149446 cri.go:96] found id: ""
	I1222 01:49:59.736859 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.736868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:59.736884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:59.736987 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:59.767806 2149446 cri.go:96] found id: ""
	I1222 01:49:59.767874 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.767897 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:59.767917 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:59.767991 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:59.795439 2149446 cri.go:96] found id: ""
	I1222 01:49:59.795504 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.795529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:59.795549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:59.795616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:59.819847 2149446 cri.go:96] found id: ""
	I1222 01:49:59.819882 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.819892 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:59.819899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:59.819966 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:59.850480 2149446 cri.go:96] found id: ""
	I1222 01:49:59.850509 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.850518 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:59.850524 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:59.850592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:59.884120 2149446 cri.go:96] found id: ""
	I1222 01:49:59.884144 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.884153 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:59.884161 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:59.884173 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:59.957806 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:59.957845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:59.974377 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:59.974406 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:00.101182 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:00.069859    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.070359    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.072368    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.077118    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.078499    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:00.101220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:00.101234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:00.179956 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:00.180075 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:02.382663 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:02.449495 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:50:02.449536 2149446 retry.go:84] will retry after 44.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
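As with storage-provisioner above, the failed dashboard apply is queued for another attempt (41.7s and 44.5s delays in this run). A generic retry-with-delay sketch of that pattern, not minikube's actual retry.go implementation:

	# Re-run the apply until it succeeds; 40s is an illustrative delay, not minikube's
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml; do
	  sleep 40
	done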
	I1222 01:50:02.730117 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:02.740995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:02.741072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:02.768772 2149446 cri.go:96] found id: ""
	I1222 01:50:02.768797 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.768805 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:02.768812 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:02.768871 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:02.794287 2149446 cri.go:96] found id: ""
	I1222 01:50:02.794311 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.794319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:02.794326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:02.794389 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:02.819466 2149446 cri.go:96] found id: ""
	I1222 01:50:02.819546 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.819568 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:02.819586 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:02.819669 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:02.853024 2149446 cri.go:96] found id: ""
	I1222 01:50:02.853053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.853062 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:02.853068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:02.853137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:02.880476 2149446 cri.go:96] found id: ""
	I1222 01:50:02.880500 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.880509 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:02.880522 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:02.880594 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:02.911828 2149446 cri.go:96] found id: ""
	I1222 01:50:02.911852 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.911861 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:02.911868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:02.911945 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:02.938488 2149446 cri.go:96] found id: ""
	I1222 01:50:02.938567 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.938589 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:02.938610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:02.938701 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:02.963720 2149446 cri.go:96] found id: ""
	I1222 01:50:02.963747 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.963756 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:02.963765 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:02.963777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:03.029961 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:03.030002 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:03.047145 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:03.047175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:03.113634 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:03.105215    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.105984    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.107648    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.108298    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.109821    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:03.113655 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:03.113667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:03.143385 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:03.143421 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:05.679417 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:05.690070 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:05.690144 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:05.715488 2149446 cri.go:96] found id: ""
	I1222 01:50:05.715513 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.715531 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:05.715538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:05.715604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:05.746380 2149446 cri.go:96] found id: ""
	I1222 01:50:05.746456 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.746478 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:05.746492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:05.746570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:05.770990 2149446 cri.go:96] found id: ""
	I1222 01:50:05.771014 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.771023 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:05.771030 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:05.771092 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:05.796198 2149446 cri.go:96] found id: ""
	I1222 01:50:05.796220 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.796228 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:05.796235 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:05.796293 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:05.823560 2149446 cri.go:96] found id: ""
	I1222 01:50:05.823587 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.823595 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:05.823601 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:05.823660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:05.856315 2149446 cri.go:96] found id: ""
	I1222 01:50:05.856349 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.856359 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:05.856365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:05.856425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:05.895348 2149446 cri.go:96] found id: ""
	I1222 01:50:05.895375 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.895383 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:05.895390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:05.895452 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:05.924398 2149446 cri.go:96] found id: ""
	I1222 01:50:05.924466 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.924478 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:05.924487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:05.924499 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:05.952966 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:05.953003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:05.984569 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:05.984596 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:06.051373 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:06.051410 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:06.069275 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:06.069345 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:06.136767 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:06.127838    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.128596    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.130370    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.131018    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.132875    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:06.127838    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.128596    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.130370    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.131018    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.132875    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
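The block above is one full pass of minikube's health-probe loop: it asks the CRI runtime for each expected control-plane container by name, finds none, then gathers logs and attempts kubectl describe nodes, which fails because nothing is listening on localhost:8443. A minimal sketch reproducing the container probe with the same crictl invocation the log records (assumption: run on the minikube node itself, where crictl is available):

    # Probe the CRI for each component minikube looks for above; empty output
    # from crictl means no matching container exists in any state.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done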
	I1222 01:50:08.637440 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:08.649276 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:08.649375 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:08.676353 2149446 cri.go:96] found id: ""
	I1222 01:50:08.676377 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.676386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:08.676393 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:08.676457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:08.702672 2149446 cri.go:96] found id: ""
	I1222 01:50:08.702696 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.702705 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:08.702712 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:08.702773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:08.733007 2149446 cri.go:96] found id: ""
	I1222 01:50:08.733029 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.733038 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:08.733044 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:08.733108 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:08.758339 2149446 cri.go:96] found id: ""
	I1222 01:50:08.758363 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.758371 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:08.758378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:08.758437 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:08.783187 2149446 cri.go:96] found id: ""
	I1222 01:50:08.783210 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.783219 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:08.783225 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:08.783290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:08.813005 2149446 cri.go:96] found id: ""
	I1222 01:50:08.813028 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.813036 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:08.813042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:08.813102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:08.840050 2149446 cri.go:96] found id: ""
	I1222 01:50:08.840073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.840081 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:08.840087 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:08.840232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:08.879901 2149446 cri.go:96] found id: ""
	I1222 01:50:08.879976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.880013 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:08.880039 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:08.880067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:08.955189 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:08.945414    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.946257    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948041    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948826    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.950714    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:08.945414    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.946257    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948041    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948826    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.950714    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:08.955272 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:08.955297 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:08.983942 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:08.983976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:09.023107 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:09.023137 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:09.089046 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:09.089084 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
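The timestamps show the cadence of this loop: each pass opens with sudo pgrep -xnf kube-apiserver.*minikube.* and repeats roughly every three seconds while pgrep keeps returning nothing. A shell sketch of the equivalent wait (an assumption about shape only; minikube itself implements this retry in Go):

    # Keep polling for a kube-apiserver process every ~3s, matching the
    # 01:50:05 / 01:50:08 / 01:50:11 ... rhythm in the log above.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done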
	I1222 01:50:11.605972 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:11.618258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:11.618329 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:11.663499 2149446 cri.go:96] found id: ""
	I1222 01:50:11.663523 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.663531 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:11.663538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:11.663603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:11.693496 2149446 cri.go:96] found id: ""
	I1222 01:50:11.693524 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.693533 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:11.693539 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:11.693610 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:11.719147 2149446 cri.go:96] found id: ""
	I1222 01:50:11.719173 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.719183 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:11.719189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:11.719256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:11.744820 2149446 cri.go:96] found id: ""
	I1222 01:50:11.744845 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.744854 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:11.744860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:11.744919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:11.770539 2149446 cri.go:96] found id: ""
	I1222 01:50:11.770566 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.770575 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:11.770582 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:11.770643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:11.796499 2149446 cri.go:96] found id: ""
	I1222 01:50:11.796522 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.796531 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:11.796538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:11.796596 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:11.821798 2149446 cri.go:96] found id: ""
	I1222 01:50:11.821822 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.821831 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:11.821837 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:11.821901 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:11.855646 2149446 cri.go:96] found id: ""
	I1222 01:50:11.855668 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.855677 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:11.855685 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:11.855697 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:11.897110 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:11.897191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:11.968365 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:11.968400 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:11.985862 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:11.985890 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:12.057003 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:12.047720    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.048930    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.049816    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.050506    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.052441    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:12.047720    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.048930    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.049816    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.050506    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.052441    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:12.057028 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:12.057044 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:14.587717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:14.601824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:14.601894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:14.633615 2149446 cri.go:96] found id: ""
	I1222 01:50:14.633639 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.633651 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:14.633667 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:14.633734 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:14.665995 2149446 cri.go:96] found id: ""
	I1222 01:50:14.666024 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.666034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:14.666040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:14.666104 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:14.694750 2149446 cri.go:96] found id: ""
	I1222 01:50:14.694832 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.694846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:14.694854 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:14.694914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:14.721367 2149446 cri.go:96] found id: ""
	I1222 01:50:14.721391 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.721401 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:14.721407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:14.721465 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:14.746783 2149446 cri.go:96] found id: ""
	I1222 01:50:14.746864 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.746880 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:14.746887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:14.746951 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:14.776055 2149446 cri.go:96] found id: ""
	I1222 01:50:14.776078 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.776087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:14.776094 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:14.776154 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:14.802137 2149446 cri.go:96] found id: ""
	I1222 01:50:14.802162 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.802171 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:14.802178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:14.802236 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:14.827099 2149446 cri.go:96] found id: ""
	I1222 01:50:14.827176 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.827192 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:14.827203 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:14.827215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:14.843942 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:14.843972 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:14.924047 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:14.915431    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.916077    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918044    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918580    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.920233    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:14.915431    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.916077    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918044    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918580    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.920233    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:14.924081 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:14.924094 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:14.952630 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:14.952666 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:14.981965 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:14.981992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
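On every pass the same evidence is collected: the kubelet and CRI-O journals, filtered dmesg, container status (crictl, with a docker fallback), and a node description. The commands are verbatim in the log; they are grouped here so they can be replayed by hand on the node when triaging this failure:

    # The per-pass evidence gathering, exactly as the log runs it.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig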
	I1222 01:50:17.554296 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:17.564949 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:17.565018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:17.596428 2149446 cri.go:96] found id: ""
	I1222 01:50:17.596451 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.596465 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:17.596472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:17.596531 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:17.629782 2149446 cri.go:96] found id: ""
	I1222 01:50:17.629804 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.629812 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:17.629820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:17.629877 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:17.658278 2149446 cri.go:96] found id: ""
	I1222 01:50:17.658305 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.658313 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:17.658319 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:17.658384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:17.684159 2149446 cri.go:96] found id: ""
	I1222 01:50:17.684227 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.684248 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:17.684268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:17.684395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:17.712612 2149446 cri.go:96] found id: ""
	I1222 01:50:17.712682 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.712713 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:17.712751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:17.712835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:17.740669 2149446 cri.go:96] found id: ""
	I1222 01:50:17.740737 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.740758 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:17.740779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:17.740868 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:17.767931 2149446 cri.go:96] found id: ""
	I1222 01:50:17.767998 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.768021 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:17.768042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:17.768129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:17.793937 2149446 cri.go:96] found id: ""
	I1222 01:50:17.794002 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.794015 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:17.794025 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:17.794037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:17.860940 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:17.860976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:17.885026 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:17.885106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:17.949991 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:17.942003    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.942521    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944226    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944614    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.946054    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:17.942003    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.942521    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944226    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944614    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.946054    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:17.950015 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:17.950028 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:17.978296 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:17.978329 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:20.513236 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:20.523539 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:20.523613 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:20.548569 2149446 cri.go:96] found id: ""
	I1222 01:50:20.548592 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.548600 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:20.548607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:20.548670 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:20.573066 2149446 cri.go:96] found id: ""
	I1222 01:50:20.573145 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.573167 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:20.573185 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:20.573276 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:20.604633 2149446 cri.go:96] found id: ""
	I1222 01:50:20.604659 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.604668 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:20.604675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:20.604735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:20.635610 2149446 cri.go:96] found id: ""
	I1222 01:50:20.635636 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.635645 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:20.635651 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:20.635717 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:20.670374 2149446 cri.go:96] found id: ""
	I1222 01:50:20.670441 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.670455 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:20.670462 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:20.670524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:20.699800 2149446 cri.go:96] found id: ""
	I1222 01:50:20.699822 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.699830 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:20.699837 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:20.699894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:20.728024 2149446 cri.go:96] found id: ""
	I1222 01:50:20.728048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.728057 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:20.728063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:20.728122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:20.758174 2149446 cri.go:96] found id: ""
	I1222 01:50:20.758198 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.758206 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:20.758215 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:20.758227 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:20.823612 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:20.823650 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:20.840051 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:20.840081 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:20.925259 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:20.915370    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.916028    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.918122    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.919738    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.921230    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:20.915370    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.916028    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.918122    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.919738    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.921230    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:20.925294 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:20.925307 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:20.955201 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:20.955234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:23.483497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:23.494221 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:23.494298 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:23.523189 2149446 cri.go:96] found id: ""
	I1222 01:50:23.523214 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.523223 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:23.523229 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:23.523289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:23.550633 2149446 cri.go:96] found id: ""
	I1222 01:50:23.550656 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.550666 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:23.550672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:23.550730 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:23.576289 2149446 cri.go:96] found id: ""
	I1222 01:50:23.576312 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.576328 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:23.576335 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:23.576395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:23.611281 2149446 cri.go:96] found id: ""
	I1222 01:50:23.611304 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.611312 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:23.611320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:23.611381 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:23.640498 2149446 cri.go:96] found id: ""
	I1222 01:50:23.640520 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.640529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:23.640535 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:23.640595 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:23.674625 2149446 cri.go:96] found id: ""
	I1222 01:50:23.674649 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.674658 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:23.674665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:23.674723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:23.699746 2149446 cri.go:96] found id: ""
	I1222 01:50:23.699771 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.699780 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:23.699786 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:23.699849 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:23.724314 2149446 cri.go:96] found id: ""
	I1222 01:50:23.724345 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.724353 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:23.724369 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:23.724381 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:23.740816 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:23.740846 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:23.813467 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:23.804843    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.805658    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807401    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807908    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.809520    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:23.804843    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.805658    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807401    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807908    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.809520    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:23.813488 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:23.813501 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:23.844492 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:23.844529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:23.883925 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:23.884003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:26.466430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:26.480754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:26.480834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:26.505742 2149446 cri.go:96] found id: ""
	I1222 01:50:26.505825 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.505856 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:26.505876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:26.505986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:26.532705 2149446 cri.go:96] found id: ""
	I1222 01:50:26.532781 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.532816 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:26.532839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:26.532932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:26.557032 2149446 cri.go:96] found id: ""
	I1222 01:50:26.557116 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.557139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:26.557170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:26.557247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:26.583725 2149446 cri.go:96] found id: ""
	I1222 01:50:26.583812 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.583835 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:26.583853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:26.583962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:26.617024 2149446 cri.go:96] found id: ""
	I1222 01:50:26.617089 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.617121 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:26.617140 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:26.617246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:26.646386 2149446 cri.go:96] found id: ""
	I1222 01:50:26.646464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.646488 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:26.646507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:26.646612 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:26.678246 2149446 cri.go:96] found id: ""
	I1222 01:50:26.678269 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.678278 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:26.678285 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:26.678347 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:26.703982 2149446 cri.go:96] found id: ""
	I1222 01:50:26.704008 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.704017 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:26.704026 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:26.704042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:26.720933 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:26.720994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:26.786938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:26.786961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:26.786974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:26.815331 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:26.815367 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:26.845602 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:26.845630 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
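Each describe nodes failure carries the same signature: kubectl's discovery cache retries the API group list five times (the memcache.go lines) before surfacing the refused dial to [::1]:8443. That refusal can be confirmed without kubectl at all; a hedged check using only bash's /dev/tcp pseudo-device (a bashism, so no extra tools are assumed on the node):

    # If this connect fails, it matches the "connect: connection refused"
    # kubectl reports above: nothing is listening on the apiserver port.
    # The fd opened in the subshell closes automatically when it exits.
    if (exec 3<>/dev/tcp/localhost/8443) 2>/dev/null; then
      echo "port 8443 is open"
    else
      echo "connection refused on localhost:8443"
    fi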
	I1222 01:50:29.422527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:29.433711 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:29.433793 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:29.465497 2149446 cri.go:96] found id: ""
	I1222 01:50:29.465548 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.465558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:29.465564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:29.465634 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:29.490894 2149446 cri.go:96] found id: ""
	I1222 01:50:29.490919 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.490927 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:29.490938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:29.490998 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:29.520626 2149446 cri.go:96] found id: ""
	I1222 01:50:29.520661 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.520670 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:29.520677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:29.520747 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:29.547848 2149446 cri.go:96] found id: ""
	I1222 01:50:29.547885 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.547893 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:29.547900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:29.547982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:29.577023 2149446 cri.go:96] found id: ""
	I1222 01:50:29.577048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.577057 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:29.577065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:29.577127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:29.611778 2149446 cri.go:96] found id: ""
	I1222 01:50:29.611814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.611823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:29.611830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:29.611903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:29.643842 2149446 cri.go:96] found id: ""
	I1222 01:50:29.643918 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.643941 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:29.643958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:29.644044 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:29.674585 2149446 cri.go:96] found id: ""
	I1222 01:50:29.674658 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.674680 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:29.674704 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:29.674743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:29.703700 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:29.703743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:29.735551 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:29.735577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.800624 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:29.800662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:29.817316 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:29.817364 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:29.907057 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
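The cycle above (a pgrep for a running apiserver, then one crictl listing per control-plane component) appears to repeat every few seconds until a container shows up or the wait times out. It can be reproduced by hand for a quicker read; a minimal sketch, assuming the test's profile name is substituted for <profile> and that crictl is present in the node image, as the logged commands show:

    # Does any apiserver process exist, and was a container ever created?
    minikube ssh -p <profile> -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*" || echo "no kube-apiserver process"'
    minikube ssh -p <profile> -- 'sudo crictl ps -a --name=kube-apiserver'

An empty listing (the repeated `found id: ""` above) means the kubelet never created the control-plane containers at all, which is why the harness keeps pulling the kubelet journal next.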
	I1222 01:50:32.407335 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:32.417713 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:32.417782 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:32.443620 2149446 cri.go:96] found id: ""
	I1222 01:50:32.443648 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.443656 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:32.443662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:32.443722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:32.473803 2149446 cri.go:96] found id: ""
	I1222 01:50:32.473831 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.473840 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:32.473847 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:32.473908 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:32.503900 2149446 cri.go:96] found id: ""
	I1222 01:50:32.503928 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.503936 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:32.503943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:32.504007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:32.533317 2149446 cri.go:96] found id: ""
	I1222 01:50:32.533427 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.533436 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:32.533442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:32.533503 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:32.558269 2149446 cri.go:96] found id: ""
	I1222 01:50:32.558292 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.558300 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:32.558307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:32.558366 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:32.583954 2149446 cri.go:96] found id: ""
	I1222 01:50:32.583976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.583984 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:32.583990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:32.584048 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:32.619938 2149446 cri.go:96] found id: ""
	I1222 01:50:32.619959 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.619967 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:32.619974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:32.620032 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:32.646855 2149446 cri.go:96] found id: ""
	I1222 01:50:32.646876 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.646883 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:32.646892 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:32.646903 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:32.681292 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:32.681331 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:32.709454 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:32.709486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:32.774478 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:32.774512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:32.790947 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:32.790977 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:32.865716 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:32.856728    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.857643    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.858424    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.859539    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.860242    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:32.856728    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.857643    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.858424    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.859539    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.860242    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:35.366405 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:35.377118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:35.377211 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:35.405034 2149446 cri.go:96] found id: ""
	I1222 01:50:35.405073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.405083 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:35.405089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:35.405172 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:35.429958 2149446 cri.go:96] found id: ""
	I1222 01:50:35.429985 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.429994 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:35.430002 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:35.430063 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:35.455069 2149446 cri.go:96] found id: ""
	I1222 01:50:35.455094 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.455103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:35.455112 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:35.455173 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:35.482985 2149446 cri.go:96] found id: ""
	I1222 01:50:35.483053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.483075 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:35.483089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:35.483165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:35.506805 2149446 cri.go:96] found id: ""
	I1222 01:50:35.506874 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.506894 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:35.506909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:35.506983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:35.532897 2149446 cri.go:96] found id: ""
	I1222 01:50:35.532925 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.532934 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:35.532941 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:35.533002 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:35.564747 2149446 cri.go:96] found id: ""
	I1222 01:50:35.564774 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.564783 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:35.564789 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:35.564858 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:35.589912 2149446 cri.go:96] found id: ""
	I1222 01:50:35.589939 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.589948 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:35.589956 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:35.589969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:35.665301 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:35.665441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:35.682348 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:35.682377 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:35.749489 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:35.741663    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.742347    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.743818    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.744264    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.745698    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:35.741663    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.742347    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.743818    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.744264    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.745698    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:35.749511 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:35.749524 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:35.776750 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:35.776782 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.306113 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:38.316656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:38.316729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:38.343191 2149446 cri.go:96] found id: ""
	I1222 01:50:38.343229 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.343238 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:38.343245 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:38.343311 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:38.371584 2149446 cri.go:96] found id: ""
	I1222 01:50:38.371609 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.371618 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:38.371625 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:38.371686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:38.399378 2149446 cri.go:96] found id: ""
	I1222 01:50:38.399401 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.399410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:38.399416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:38.399476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:38.424466 2149446 cri.go:96] found id: ""
	I1222 01:50:38.424492 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.424500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:38.424513 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:38.424575 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:38.449419 2149446 cri.go:96] found id: ""
	I1222 01:50:38.449452 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.449461 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:38.449468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:38.449533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:38.475321 2149446 cri.go:96] found id: ""
	I1222 01:50:38.475347 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.475356 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:38.475363 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:38.475423 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:38.499428 2149446 cri.go:96] found id: ""
	I1222 01:50:38.499458 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.499469 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:38.499476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:38.499541 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:38.524738 2149446 cri.go:96] found id: ""
	I1222 01:50:38.524766 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.524780 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:38.524789 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:38.524800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:38.557984 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:38.558022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.589548 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:38.589577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:38.671416 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:38.671453 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:38.688135 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:38.688164 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:38.751396 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:38.743333    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.744008    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.745568    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.746050    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.747565    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:38.743333    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.744008    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.745568    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.746050    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.747565    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:40.899829 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:50:40.960744 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:40.960855 2149446 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
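Note that the `--validate=false` hint in the stderr only addresses the first failure: kubectl fetches the OpenAPI schema for client-side validation, and skipping that download does not make the apply succeed while nothing answers on 8443. A hedged illustration, using exactly the paths logged above:

    # Skipping validation removes the openapi download error, but the
    # apply itself still needs a reachable apiserver and will fail too.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml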
	I1222 01:50:41.252211 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:41.263048 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:41.263123 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:41.296135 2149446 cri.go:96] found id: ""
	I1222 01:50:41.296159 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.296167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:41.296174 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:41.296232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:41.322198 2149446 cri.go:96] found id: ""
	I1222 01:50:41.322277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.322291 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:41.322299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:41.322360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:41.348725 2149446 cri.go:96] found id: ""
	I1222 01:50:41.348750 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.348760 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:41.348766 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:41.348825 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:41.374926 2149446 cri.go:96] found id: ""
	I1222 01:50:41.375005 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.375021 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:41.375029 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:41.375096 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:41.406446 2149446 cri.go:96] found id: ""
	I1222 01:50:41.406537 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.406556 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:41.406566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:41.406648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:41.432212 2149446 cri.go:96] found id: ""
	I1222 01:50:41.432239 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.432247 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:41.432254 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:41.432321 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:41.463730 2149446 cri.go:96] found id: ""
	I1222 01:50:41.463755 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.463765 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:41.463771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:41.463833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:41.490157 2149446 cri.go:96] found id: ""
	I1222 01:50:41.490182 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.490193 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:41.490201 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:41.490213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:41.558943 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:41.558981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:41.576405 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:41.576435 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:41.666063 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:41.666083 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:41.666095 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:41.696072 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:41.696105 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.226469 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:44.236983 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:44.237055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:44.261867 2149446 cri.go:96] found id: ""
	I1222 01:50:44.261892 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.261900 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:44.261907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:44.261971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:44.287438 2149446 cri.go:96] found id: ""
	I1222 01:50:44.287462 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.287471 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:44.287478 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:44.287536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:44.313755 2149446 cri.go:96] found id: ""
	I1222 01:50:44.313779 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.313788 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:44.313794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:44.313861 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:44.347563 2149446 cri.go:96] found id: ""
	I1222 01:50:44.347589 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.347597 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:44.347604 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:44.347688 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:44.377289 2149446 cri.go:96] found id: ""
	I1222 01:50:44.377313 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.377354 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:44.377362 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:44.377443 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:44.403181 2149446 cri.go:96] found id: ""
	I1222 01:50:44.403207 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.403215 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:44.403222 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:44.403280 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:44.432425 2149446 cri.go:96] found id: ""
	I1222 01:50:44.432449 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.432457 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:44.432465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:44.432529 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:44.458001 2149446 cri.go:96] found id: ""
	I1222 01:50:44.458030 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.458039 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:44.458048 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:44.458089 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.485816 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:44.485844 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:44.552935 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:44.552970 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:44.569824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:44.569852 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:44.652688 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:44.652760 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:44.652785 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:46.992640 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:47.054063 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:47.054189 2149446 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:50:47.057341 2149446 out.go:179] * Enabled addons: 
	I1222 01:50:47.060205 2149446 addons.go:530] duration metric: took 1m59.289939725s for enable addons: enabled=[]
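With every apply failing the same way, the addon phase gives up with `enabled=[]` after just under two minutes of retries, as the duration metric shows. Before retrying any addon it is cheaper to probe the endpoint kubectl is dialing; a minimal sketch (run inside the node, e.g. via `minikube ssh`):

    # Mirror kubectl's dial: is anything serving on localhost:8443?
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"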
	I1222 01:50:47.182147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:47.192818 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:47.192903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:47.224024 2149446 cri.go:96] found id: ""
	I1222 01:50:47.224046 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.224055 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:47.224061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:47.224122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:47.250330 2149446 cri.go:96] found id: ""
	I1222 01:50:47.250357 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.250365 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:47.250372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:47.250431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:47.275271 2149446 cri.go:96] found id: ""
	I1222 01:50:47.275294 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.275302 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:47.275309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:47.275370 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:47.300369 2149446 cri.go:96] found id: ""
	I1222 01:50:47.300394 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.300403 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:47.300409 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:47.300468 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:47.325147 2149446 cri.go:96] found id: ""
	I1222 01:50:47.325213 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.325234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:47.325253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:47.325362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:47.353926 2149446 cri.go:96] found id: ""
	I1222 01:50:47.353953 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.353961 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:47.353968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:47.354028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:47.378935 2149446 cri.go:96] found id: ""
	I1222 01:50:47.378961 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.378970 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:47.378977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:47.379040 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:47.404019 2149446 cri.go:96] found id: ""
	I1222 01:50:47.404042 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.404051 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:47.404061 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:47.404078 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:47.473731 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:47.473766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:47.490597 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:47.490625 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:47.557658 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:47.550047    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.550549    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552046    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552472    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.553897    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
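The describe-nodes failure is the same symptom one layer down: kubectl's discovery client (memcache.go) logs five failed attempts to GET /api and gives up because the TCP dial to [::1]:8443 is refused. A sketch of reproducing the discovery call directly, reusing the binary and kubeconfig paths from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /api
    # While the apiserver is down this prints:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?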
	I1222 01:50:47.557681 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:47.557694 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:47.586853 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:47.586888 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
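The block above is one full diagnostic pass: pgrep for a running apiserver, then a crictl query for each expected control-plane container by name, then log collection. A bash sketch of the same per-component scan (component list and crictl flags taken verbatim from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      # An empty result is what the log reports as: No container was found matching "<name>"
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done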
	I1222 01:50:50.125437 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:50.138125 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:50.138195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:50.166675 2149446 cri.go:96] found id: ""
	I1222 01:50:50.166698 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.166706 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:50.166712 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:50.166775 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:50.195252 2149446 cri.go:96] found id: ""
	I1222 01:50:50.195277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.195285 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:50.195293 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:50.195354 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:50.221357 2149446 cri.go:96] found id: ""
	I1222 01:50:50.221390 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.221406 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:50.221413 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:50.221504 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:50.247114 2149446 cri.go:96] found id: ""
	I1222 01:50:50.247137 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.247145 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:50.247163 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:50.247221 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:50.272457 2149446 cri.go:96] found id: ""
	I1222 01:50:50.272482 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.272491 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:50.272497 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:50.272558 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:50.302435 2149446 cri.go:96] found id: ""
	I1222 01:50:50.302464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.302473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:50.302479 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:50.302554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:50.325940 2149446 cri.go:96] found id: ""
	I1222 01:50:50.326010 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.326025 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:50.326033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:50.326095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:50.355131 2149446 cri.go:96] found id: ""
	I1222 01:50:50.355156 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.355167 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:50.355176 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:50.355219 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:50.421016 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:50.421050 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:50.438999 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:50.439026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:50.505148 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:50.497150    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.497999    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499544    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499840    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.501315    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:50.505167 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:50.505178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:50.533117 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:50.533151 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:53.063902 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:53.074689 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:53.074763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:53.105072 2149446 cri.go:96] found id: ""
	I1222 01:50:53.105098 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.105108 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:53.105114 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:53.105178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:53.135898 2149446 cri.go:96] found id: ""
	I1222 01:50:53.135924 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.135933 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:53.135939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:53.135999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:53.167436 2149446 cri.go:96] found id: ""
	I1222 01:50:53.167461 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.167469 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:53.167476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:53.167559 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:53.194408 2149446 cri.go:96] found id: ""
	I1222 01:50:53.194523 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.194547 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:53.194566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:53.194648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:53.219170 2149446 cri.go:96] found id: ""
	I1222 01:50:53.219232 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.219246 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:53.219259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:53.219319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:53.245034 2149446 cri.go:96] found id: ""
	I1222 01:50:53.245059 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.245067 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:53.245074 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:53.245131 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:53.271853 2149446 cri.go:96] found id: ""
	I1222 01:50:53.271884 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.271893 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:53.271899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:53.271970 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:53.296192 2149446 cri.go:96] found id: ""
	I1222 01:50:53.296225 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.296234 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:53.296243 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:53.296259 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:53.363334 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:53.363372 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:53.381487 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:53.381517 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:53.448549 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:53.440319    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.440874    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.442515    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.443004    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.444491    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:53.448571 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:53.448583 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:53.481685 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:53.481727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
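The journal, dmesg, and container-status reads are plain shell commands and can be replayed verbatim inside the node when triaging a run like this one:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fallback chain used for container status: prefer crictl, fall back to docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a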
	I1222 01:50:56.012872 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:56.025767 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:56.025845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:56.053844 2149446 cri.go:96] found id: ""
	I1222 01:50:56.053873 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.053882 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:56.053890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:56.053961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:56.082849 2149446 cri.go:96] found id: ""
	I1222 01:50:56.082873 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.082882 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:56.082888 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:56.082964 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:56.116254 2149446 cri.go:96] found id: ""
	I1222 01:50:56.116336 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.116359 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:56.116379 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:56.116489 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:56.157611 2149446 cri.go:96] found id: ""
	I1222 01:50:56.157636 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.157644 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:56.157662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:56.157721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:56.183024 2149446 cri.go:96] found id: ""
	I1222 01:50:56.183102 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.183123 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:56.183137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:56.183214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:56.211789 2149446 cri.go:96] found id: ""
	I1222 01:50:56.211814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.211823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:56.211830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:56.211888 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:56.238212 2149446 cri.go:96] found id: ""
	I1222 01:50:56.238286 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.238309 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:56.238332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:56.238421 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:56.269053 2149446 cri.go:96] found id: ""
	I1222 01:50:56.269079 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.269088 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:56.269097 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:56.269110 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:56.339489 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:56.339527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:56.356488 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:56.356627 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:56.423568 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:56.415753    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.416507    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418088    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418397    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.419880    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:56.423639 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:56.423664 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:56.452827 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:56.452859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:58.984493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:58.994909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:58.994982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:59.022329 2149446 cri.go:96] found id: ""
	I1222 01:50:59.022354 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.022363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:59.022369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:59.022429 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:59.047878 2149446 cri.go:96] found id: ""
	I1222 01:50:59.047901 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.047909 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:59.047915 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:59.047975 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:59.072596 2149446 cri.go:96] found id: ""
	I1222 01:50:59.072622 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.072631 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:59.072637 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:59.072696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:59.104362 2149446 cri.go:96] found id: ""
	I1222 01:50:59.104391 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.104400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:59.104406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:59.104470 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:59.143935 2149446 cri.go:96] found id: ""
	I1222 01:50:59.143960 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.143970 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:59.143977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:59.144037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:59.172279 2149446 cri.go:96] found id: ""
	I1222 01:50:59.172305 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.172314 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:59.172321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:59.172390 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:59.197781 2149446 cri.go:96] found id: ""
	I1222 01:50:59.197804 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.197813 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:59.197819 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:59.197879 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:59.223417 2149446 cri.go:96] found id: ""
	I1222 01:50:59.223441 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.223450 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:59.223459 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:59.223470 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:59.289154 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:59.289191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:59.306341 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:59.306372 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:59.371286 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:59.363578    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.364246    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.365700    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.366151    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.367566    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:59.371309 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:59.371321 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:59.399863 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:59.399898 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:01.927120 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:01.939501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:01.939576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:01.967293 2149446 cri.go:96] found id: ""
	I1222 01:51:01.967326 2149446 logs.go:282] 0 containers: []
	W1222 01:51:01.967336 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:01.967343 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:01.967430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:02.010081 2149446 cri.go:96] found id: ""
	I1222 01:51:02.010124 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.010134 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:02.010141 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:02.010231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:02.037304 2149446 cri.go:96] found id: ""
	I1222 01:51:02.037354 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.037363 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:02.037369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:02.037430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:02.065893 2149446 cri.go:96] found id: ""
	I1222 01:51:02.065917 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.065926 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:02.065932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:02.065992 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:02.098337 2149446 cri.go:96] found id: ""
	I1222 01:51:02.098366 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.098375 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:02.098383 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:02.098448 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:02.137900 2149446 cri.go:96] found id: ""
	I1222 01:51:02.137929 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.137939 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:02.137946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:02.138104 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:02.170824 2149446 cri.go:96] found id: ""
	I1222 01:51:02.170852 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.170861 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:02.170868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:02.170947 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:02.206007 2149446 cri.go:96] found id: ""
	I1222 01:51:02.206032 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.206041 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:02.206050 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:02.206062 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:02.273536 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:02.273574 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:02.290756 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:02.290788 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:02.358784 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:02.350805    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.351393    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.352892    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.353378    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.355003    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:02.358806 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:02.358821 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:02.386732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:02.386772 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:04.916635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:04.927272 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:04.927352 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:04.952318 2149446 cri.go:96] found id: ""
	I1222 01:51:04.952350 2149446 logs.go:282] 0 containers: []
	W1222 01:51:04.952358 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:04.952364 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:04.952424 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:04.980287 2149446 cri.go:96] found id: ""
	I1222 01:51:04.980309 2149446 logs.go:282] 0 containers: []
	W1222 01:51:04.980317 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:04.980338 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:04.980398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:05.011897 2149446 cri.go:96] found id: ""
	I1222 01:51:05.011975 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.011998 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:05.012019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:05.012114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:05.039439 2149446 cri.go:96] found id: ""
	I1222 01:51:05.039464 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.039473 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:05.039480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:05.039562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:05.065276 2149446 cri.go:96] found id: ""
	I1222 01:51:05.065309 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.065318 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:05.065352 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:05.065421 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:05.104171 2149446 cri.go:96] found id: ""
	I1222 01:51:05.104250 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.104272 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:05.104293 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:05.104397 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:05.140837 2149446 cri.go:96] found id: ""
	I1222 01:51:05.140872 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.140882 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:05.140889 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:05.140962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:05.174413 2149446 cri.go:96] found id: ""
	I1222 01:51:05.174439 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.174448 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:05.174457 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:05.174469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:05.241273 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:05.232831    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.233309    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.235135    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.235674    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.237437    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:05.241367 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:05.241395 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:05.269917 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:05.269947 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:05.298262 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:05.298289 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:05.366093 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:05.366127 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
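Note the cadence: every pass opens with the same pgrep and repeats roughly every three seconds (01:50:47, :50, :53, :56, :59, then 01:51:02, :05). A minimal sketch of that wait loop, assuming the ~3 s interval is fixed:

    # pgrep flags as in the log: -x exact match, -n newest, -f match against the full command line
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
    echo "kube-apiserver process found"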
	I1222 01:51:07.884446 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:07.897419 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:07.897506 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:07.923443 2149446 cri.go:96] found id: ""
	I1222 01:51:07.923466 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.923474 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:07.923481 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:07.923538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:07.949573 2149446 cri.go:96] found id: ""
	I1222 01:51:07.949595 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.949604 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:07.949610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:07.949672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:07.974625 2149446 cri.go:96] found id: ""
	I1222 01:51:07.974648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.974657 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:07.974664 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:07.974722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:08.000701 2149446 cri.go:96] found id: ""
	I1222 01:51:08.000729 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.000738 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:08.000746 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:08.000823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:08.030073 2149446 cri.go:96] found id: ""
	I1222 01:51:08.030098 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.030106 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:08.030113 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:08.030184 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:08.056629 2149446 cri.go:96] found id: ""
	I1222 01:51:08.056665 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.056675 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:08.056682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:08.056742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:08.086264 2149446 cri.go:96] found id: ""
	I1222 01:51:08.086299 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.086308 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:08.086320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:08.086393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:08.116679 2149446 cri.go:96] found id: ""
	I1222 01:51:08.116709 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.116719 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:08.116728 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:08.116748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:08.157494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:08.157521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:08.233683 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:08.233771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:08.250905 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:08.250934 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:08.316451 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:08.307255    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.308718    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.309622    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311377    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311630    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:08.316472 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:08.316484 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
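The sweep above is minikube's control-plane inventory: it lists CRI containers by component name, finds none, and falls back to host-level logs (kubelet, dmesg, describe nodes, CRI-O, container status). The same check can be reproduced by hand; a minimal sketch, assuming shell access to the node (e.g. via `minikube ssh`) and using the same container names the log queries:

	# inside the node: repeat minikube's per-component crictl query
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  printf '%-24s ' "$c"
	  sudo crictl --timeout=10s ps -a --quiet --name="$c" | wc -l
	done

A count of 0 for every component, as here, means the runtime is answering but no control-plane containers were ever created, which typically points at the kubelet or its static-pod manifests rather than at CRI-O itself.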
	I1222 01:51:10.846347 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:10.861517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:10.861699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:10.899351 2149446 cri.go:96] found id: ""
	I1222 01:51:10.899458 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.899504 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:10.899543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:10.899659 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:10.929750 2149446 cri.go:96] found id: ""
	I1222 01:51:10.929818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.929839 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:10.929851 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:10.929929 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:10.960575 2149446 cri.go:96] found id: ""
	I1222 01:51:10.960610 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.960619 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:10.960626 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:10.960698 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:10.985907 2149446 cri.go:96] found id: ""
	I1222 01:51:10.985932 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.985940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:10.985947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:10.986010 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:11.014799 2149446 cri.go:96] found id: ""
	I1222 01:51:11.014878 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.014893 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:11.014901 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:11.014965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:11.040928 2149446 cri.go:96] found id: ""
	I1222 01:51:11.040960 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.040970 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:11.040976 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:11.041078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:11.067246 2149446 cri.go:96] found id: ""
	I1222 01:51:11.067327 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.067351 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:11.067365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:11.067442 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:11.095605 2149446 cri.go:96] found id: ""
	I1222 01:51:11.095631 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.095640 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:11.095650 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:11.095662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:11.169987 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:11.170076 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:11.186770 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:11.186800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:11.248524 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:11.240323    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.241076    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242336    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242961    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.244559    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:11.248546 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:11.248559 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:11.276178 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:11.276215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
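Every "describe nodes" attempt above fails identically: kubectl inside the node cannot connect to https://localhost:8443, so no kube-apiserver is listening there. A minimal sketch for telling "nothing listening" apart from "listening but unhealthy", assuming shell access to the node, with port 8443 taken from the errors above:

	# inside the node
	sudo ss -ltnp 'sport = :8443'                    # empty output: nothing bound to 8443
	curl -ksS https://localhost:8443/healthz; echo   # "connection refused" matches the log

Here the connection is refused outright, consistent with the empty crictl listings: the apiserver container never started, so there is nothing to be unhealthy.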
	I1222 01:51:13.807549 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:13.818390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:13.818463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:13.844700 2149446 cri.go:96] found id: ""
	I1222 01:51:13.844722 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.844730 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:13.844736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:13.844795 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:13.875626 2149446 cri.go:96] found id: ""
	I1222 01:51:13.875648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.875656 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:13.875662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:13.875727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:13.904646 2149446 cri.go:96] found id: ""
	I1222 01:51:13.904667 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.904676 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:13.904682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:13.904740 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:13.933165 2149446 cri.go:96] found id: ""
	I1222 01:51:13.933241 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.933263 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:13.933282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:13.933402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:13.961468 2149446 cri.go:96] found id: ""
	I1222 01:51:13.961492 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.961501 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:13.961507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:13.961576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:13.985666 2149446 cri.go:96] found id: ""
	I1222 01:51:13.985689 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.985697 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:13.985704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:13.985760 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:14.018377 2149446 cri.go:96] found id: ""
	I1222 01:51:14.018402 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.018411 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:14.018417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:14.018484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:14.044854 2149446 cri.go:96] found id: ""
	I1222 01:51:14.044879 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.044888 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:14.044897 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:14.044933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:14.110126 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:14.110206 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:14.128030 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:14.128108 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:14.196957 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:14.188697    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.189523    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191343    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191689    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.193153    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:14.197029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:14.197056 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:14.226166 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:14.226203 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:16.756369 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:16.767083 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:16.767155 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:16.793678 2149446 cri.go:96] found id: ""
	I1222 01:51:16.793704 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.793712 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:16.793719 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:16.793779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:16.819158 2149446 cri.go:96] found id: ""
	I1222 01:51:16.819183 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.819192 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:16.819198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:16.819260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:16.848081 2149446 cri.go:96] found id: ""
	I1222 01:51:16.848103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.848112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:16.848119 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:16.848178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:16.889132 2149446 cri.go:96] found id: ""
	I1222 01:51:16.889210 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.889233 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:16.889253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:16.889357 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:16.922581 2149446 cri.go:96] found id: ""
	I1222 01:51:16.922656 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.922677 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:16.922697 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:16.922788 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:16.947759 2149446 cri.go:96] found id: ""
	I1222 01:51:16.947833 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.947855 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:16.947876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:16.947959 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:16.973072 2149446 cri.go:96] found id: ""
	I1222 01:51:16.973106 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.973115 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:16.973138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:16.973219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:17.003269 2149446 cri.go:96] found id: ""
	I1222 01:51:17.003303 2149446 logs.go:282] 0 containers: []
	W1222 01:51:17.003328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:17.003339 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:17.003371 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:17.070360 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:17.070398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:17.087501 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:17.087530 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:17.166797 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:17.158419    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.159432    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161029    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161376    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.162807    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:17.166818 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:17.166830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:17.196986 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:17.197018 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:19.726818 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:19.737430 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:19.737505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:19.761268 2149446 cri.go:96] found id: ""
	I1222 01:51:19.761291 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.761300 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:19.761306 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:19.761388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:19.787563 2149446 cri.go:96] found id: ""
	I1222 01:51:19.787587 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.787596 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:19.787602 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:19.787665 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:19.816441 2149446 cri.go:96] found id: ""
	I1222 01:51:19.816468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.816477 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:19.816483 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:19.816540 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:19.851780 2149446 cri.go:96] found id: ""
	I1222 01:51:19.851812 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.851821 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:19.851827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:19.851895 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:19.888751 2149446 cri.go:96] found id: ""
	I1222 01:51:19.888785 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.888794 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:19.888800 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:19.888866 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:19.923050 2149446 cri.go:96] found id: ""
	I1222 01:51:19.923078 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.923087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:19.923093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:19.923151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:19.948402 2149446 cri.go:96] found id: ""
	I1222 01:51:19.948435 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.948444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:19.948451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:19.948520 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:19.973868 2149446 cri.go:96] found id: ""
	I1222 01:51:19.973894 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.973903 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:19.973912 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:19.973942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:20.041369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:20.032095    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.032873    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.034426    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.035065    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.036800    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:20.041391 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:20.041404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:20.070678 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:20.070728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:20.112858 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:20.112886 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:20.189261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:20.189300 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
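Of all the sources gathered in each sweep, only the kubelet journal returns substantial output (crictl keeps coming back empty and describe nodes keeps failing), so it is the natural place to look for why the static pods never started. A sketch of the equivalent direct query, mirroring the -n 400 window minikube uses and adding an error-only pass for interactive triage:

	# inside the node: same slice minikube gathers, plus error-level filtering
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u kubelet -p err --no-pager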
	I1222 01:51:22.708863 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:22.719695 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:22.719790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:22.746691 2149446 cri.go:96] found id: ""
	I1222 01:51:22.746715 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.746724 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:22.746731 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:22.746790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:22.775484 2149446 cri.go:96] found id: ""
	I1222 01:51:22.775511 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.775521 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:22.775527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:22.775592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:22.801739 2149446 cri.go:96] found id: ""
	I1222 01:51:22.801771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.801781 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:22.801787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:22.801855 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:22.826408 2149446 cri.go:96] found id: ""
	I1222 01:51:22.826431 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.826439 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:22.826446 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:22.826510 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:22.857976 2149446 cri.go:96] found id: ""
	I1222 01:51:22.858000 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.858009 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:22.858016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:22.858076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:22.895329 2149446 cri.go:96] found id: ""
	I1222 01:51:22.895357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.895365 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:22.895372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:22.895431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:22.924900 2149446 cri.go:96] found id: ""
	I1222 01:51:22.924923 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.924932 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:22.924938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:22.924997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:22.949093 2149446 cri.go:96] found id: ""
	I1222 01:51:22.949166 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.949177 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:22.949214 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:22.949228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:22.978825 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:22.978862 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:23.008996 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:23.009026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:23.075469 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:23.075506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:23.092176 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:23.092253 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:23.168462 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:23.160726    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.161296    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.162792    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.163269    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.164704    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:25.669110 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:25.679471 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:25.679544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:25.707773 2149446 cri.go:96] found id: ""
	I1222 01:51:25.707798 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.707807 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:25.707814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:25.707872 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:25.734428 2149446 cri.go:96] found id: ""
	I1222 01:51:25.734452 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.734460 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:25.734480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:25.734536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:25.764253 2149446 cri.go:96] found id: ""
	I1222 01:51:25.764349 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.764381 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:25.764416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:25.764515 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:25.791966 2149446 cri.go:96] found id: ""
	I1222 01:51:25.792042 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.792076 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:25.792100 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:25.792193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:25.820057 2149446 cri.go:96] found id: ""
	I1222 01:51:25.820132 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.820167 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:25.820191 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:25.820279 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:25.854439 2149446 cri.go:96] found id: ""
	I1222 01:51:25.854514 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.854537 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:25.854556 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:25.854661 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:25.887808 2149446 cri.go:96] found id: ""
	I1222 01:51:25.887884 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.887905 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:25.887919 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:25.887996 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:25.922356 2149446 cri.go:96] found id: ""
	I1222 01:51:25.922380 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.922389 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:25.922398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:25.922443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:25.987720 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:25.987760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:26.005813 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:26.005849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:26.075026 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:26.066017    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.066714    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.068425    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.069221    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.070982    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:26.075049 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:26.075061 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:26.105526 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:26.105564 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
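Between sweeps, minikube polls for an apiserver process with `pgrep -xnf kube-apiserver.*minikube.*` on a roughly three-second cadence (01:51:25 to 01:51:28 above) and never finds one. The same wait can be expressed directly; a sketch with the pattern copied from the Run lines and an arbitrary 2s interval:

	# inside the node: block until an apiserver process appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 2
	done
	echo kube-apiserver is up

In this run the loop would never exit, which is presumably why the enclosing test ultimately times out.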
	I1222 01:51:28.644504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:28.655061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:28.655129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:28.679454 2149446 cri.go:96] found id: ""
	I1222 01:51:28.679477 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.679486 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:28.679492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:28.679551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:28.708993 2149446 cri.go:96] found id: ""
	I1222 01:51:28.709018 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.709027 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:28.709033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:28.709095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:28.733612 2149446 cri.go:96] found id: ""
	I1222 01:51:28.733637 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.733646 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:28.733652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:28.733711 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:28.762256 2149446 cri.go:96] found id: ""
	I1222 01:51:28.762285 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.762294 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:28.762300 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:28.762359 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:28.788169 2149446 cri.go:96] found id: ""
	I1222 01:51:28.788193 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.788201 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:28.788207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:28.788266 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:28.816745 2149446 cri.go:96] found id: ""
	I1222 01:51:28.816771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.816780 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:28.816787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:28.816852 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:28.845364 2149446 cri.go:96] found id: ""
	I1222 01:51:28.845392 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.845401 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:28.845407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:28.845469 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:28.881787 2149446 cri.go:96] found id: ""
	I1222 01:51:28.881818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.881827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:28.881836 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:28.881855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:28.914780 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:28.914845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.954392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:28.954419 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:29.021327 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:29.021365 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:29.037773 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:29.037801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:29.100009 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:29.091767    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.092439    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.093858    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.094561    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.096086    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:31.600320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:31.612534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:31.612605 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:31.639353 2149446 cri.go:96] found id: ""
	I1222 01:51:31.639376 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.639384 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:31.639391 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:31.639449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:31.664903 2149446 cri.go:96] found id: ""
	I1222 01:51:31.664927 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.664936 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:31.664942 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:31.665003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:31.694601 2149446 cri.go:96] found id: ""
	I1222 01:51:31.694626 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.694635 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:31.694642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:31.694700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:31.719433 2149446 cri.go:96] found id: ""
	I1222 01:51:31.719476 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.719485 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:31.719493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:31.719573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:31.744350 2149446 cri.go:96] found id: ""
	I1222 01:51:31.744379 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.744388 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:31.744395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:31.744457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:31.768144 2149446 cri.go:96] found id: ""
	I1222 01:51:31.768171 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.768180 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:31.768187 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:31.768246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:31.793170 2149446 cri.go:96] found id: ""
	I1222 01:51:31.793195 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.793204 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:31.793211 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:31.793269 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:31.817469 2149446 cri.go:96] found id: ""
	I1222 01:51:31.817495 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.817510 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:31.817520 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:31.817531 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:31.833979 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:31.834009 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:31.921313 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:31.921411 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:31.921424 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:31.950732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:31.950766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:31.979280 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:31.979390 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:34.551921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:34.562702 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:34.562773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:34.587409 2149446 cri.go:96] found id: ""
	I1222 01:51:34.587437 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.587446 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:34.587453 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:34.587512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:34.615168 2149446 cri.go:96] found id: ""
	I1222 01:51:34.615194 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.615203 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:34.615210 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:34.615270 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:34.640950 2149446 cri.go:96] found id: ""
	I1222 01:51:34.640975 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.640983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:34.640990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:34.641070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:34.666882 2149446 cri.go:96] found id: ""
	I1222 01:51:34.666908 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.666917 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:34.666924 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:34.667003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:34.691815 2149446 cri.go:96] found id: ""
	I1222 01:51:34.691841 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.691850 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:34.691856 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:34.691935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:34.716718 2149446 cri.go:96] found id: ""
	I1222 01:51:34.716744 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.716753 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:34.716760 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:34.716819 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:34.741533 2149446 cri.go:96] found id: ""
	I1222 01:51:34.741557 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.741565 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:34.741572 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:34.741638 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:34.766353 2149446 cri.go:96] found id: ""
	I1222 01:51:34.766378 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.766387 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:34.766398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:34.766410 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:34.831354 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:34.831392 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:34.849488 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:34.849517 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:34.936942 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:34.926834    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.927637    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929290    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929888    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.931562    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:34.926834    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.927637    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929290    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929888    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.931562    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:34.936964 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:34.936976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:34.966069 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:34.966106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:37.498453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:37.509387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:37.509460 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:37.536333 2149446 cri.go:96] found id: ""
	I1222 01:51:37.536418 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.536440 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:37.536458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:37.536564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:37.561738 2149446 cri.go:96] found id: ""
	I1222 01:51:37.561761 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.561770 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:37.561777 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:37.561835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:37.589068 2149446 cri.go:96] found id: ""
	I1222 01:51:37.589103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.589112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:37.589118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:37.589187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:37.613720 2149446 cri.go:96] found id: ""
	I1222 01:51:37.613745 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.613753 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:37.613760 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:37.613828 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:37.638911 2149446 cri.go:96] found id: ""
	I1222 01:51:37.638936 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.638944 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:37.638950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:37.639030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:37.664184 2149446 cri.go:96] found id: ""
	I1222 01:51:37.664209 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.664218 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:37.664224 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:37.664336 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:37.689816 2149446 cri.go:96] found id: ""
	I1222 01:51:37.689840 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.689849 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:37.689855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:37.689915 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:37.714864 2149446 cri.go:96] found id: ""
	I1222 01:51:37.714888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.714896 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:37.714905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:37.714919 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:37.731696 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:37.731722 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:37.802911 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:37.789256    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.794884    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.795605    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.797500    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.798024    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:37.789256    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.794884    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.795605    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.797500    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.798024    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:37.802930 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:37.802944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:37.831970 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:37.832005 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:37.864010 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:37.864044 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:40.445669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:40.458164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:40.458231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:40.484132 2149446 cri.go:96] found id: ""
	I1222 01:51:40.484159 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.484167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:40.484173 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:40.484231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:40.510161 2149446 cri.go:96] found id: ""
	I1222 01:51:40.510187 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.510195 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:40.510201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:40.510263 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:40.537903 2149446 cri.go:96] found id: ""
	I1222 01:51:40.537929 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.537938 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:40.537944 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:40.538008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:40.569673 2149446 cri.go:96] found id: ""
	I1222 01:51:40.569699 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.569707 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:40.569714 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:40.569773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:40.594097 2149446 cri.go:96] found id: ""
	I1222 01:51:40.594122 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.594130 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:40.594137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:40.594199 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:40.619442 2149446 cri.go:96] found id: ""
	I1222 01:51:40.619468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.619477 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:40.619486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:40.619545 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:40.644458 2149446 cri.go:96] found id: ""
	I1222 01:51:40.644485 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.644494 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:40.644500 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:40.644559 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:40.669598 2149446 cri.go:96] found id: ""
	I1222 01:51:40.669625 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.669634 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:40.669642 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:40.669655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:40.686418 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:40.686488 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:40.747161 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:40.739409    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.739815    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.740969    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.741303    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.742747    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:40.739409    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.739815    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.740969    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.741303    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.742747    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:40.747183 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:40.747195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:40.774671 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:40.774703 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:40.806536 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:40.806568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:43.371926 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:43.382998 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:43.383098 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:43.418960 2149446 cri.go:96] found id: ""
	I1222 01:51:43.419037 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.419060 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:43.419081 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:43.419145 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:43.447476 2149446 cri.go:96] found id: ""
	I1222 01:51:43.447502 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.447511 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:43.447517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:43.447586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:43.474323 2149446 cri.go:96] found id: ""
	I1222 01:51:43.474357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.474367 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:43.474374 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:43.474448 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:43.500245 2149446 cri.go:96] found id: ""
	I1222 01:51:43.500323 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.500371 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:43.500390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:43.500484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:43.526338 2149446 cri.go:96] found id: ""
	I1222 01:51:43.526361 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.526369 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:43.526375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:43.526434 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:43.551187 2149446 cri.go:96] found id: ""
	I1222 01:51:43.551211 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.551220 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:43.551227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:43.551288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:43.576838 2149446 cri.go:96] found id: ""
	I1222 01:51:43.576861 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.576870 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:43.576877 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:43.576939 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:43.603211 2149446 cri.go:96] found id: ""
	I1222 01:51:43.603283 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.603303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:43.603326 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:43.603363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:43.671168 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:43.671202 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:43.688181 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:43.688252 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:43.752146 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:43.744254    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.744955    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746431    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746780    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.748199    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:43.744254    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.744955    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746431    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746780    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.748199    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:43.752167 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:43.752178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:43.780617 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:43.780654 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:46.308524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:46.318956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:46.319027 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:46.346481 2149446 cri.go:96] found id: ""
	I1222 01:51:46.346508 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.346517 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:46.346523 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:46.346582 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:46.375577 2149446 cri.go:96] found id: ""
	I1222 01:51:46.375603 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.375612 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:46.375618 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:46.375681 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:46.410693 2149446 cri.go:96] found id: ""
	I1222 01:51:46.410721 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.410730 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:46.410736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:46.410805 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:46.437367 2149446 cri.go:96] found id: ""
	I1222 01:51:46.437391 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.437400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:46.437407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:46.437466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:46.463337 2149446 cri.go:96] found id: ""
	I1222 01:51:46.463364 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.463373 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:46.463379 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:46.463436 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:46.487509 2149446 cri.go:96] found id: ""
	I1222 01:51:46.487532 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.487540 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:46.487547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:46.487609 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:46.513965 2149446 cri.go:96] found id: ""
	I1222 01:51:46.513989 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.513998 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:46.514037 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:46.514111 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:46.539974 2149446 cri.go:96] found id: ""
	I1222 01:51:46.539999 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.540008 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:46.540017 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:46.540032 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:46.604304 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:46.604345 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:46.620864 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:46.620894 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:46.682614 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:46.674017    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.674824    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676316    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676738    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.678200    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:46.674017    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.674824    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676316    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676738    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.678200    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:46.682639 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:46.682652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:46.711167 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:46.711201 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:49.246342 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:49.256853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:49.256925 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:49.285523 2149446 cri.go:96] found id: ""
	I1222 01:51:49.285548 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.285557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:49.285563 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:49.285621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:49.314893 2149446 cri.go:96] found id: ""
	I1222 01:51:49.314915 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.314923 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:49.314930 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:49.314988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:49.346252 2149446 cri.go:96] found id: ""
	I1222 01:51:49.346281 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.346290 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:49.346297 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:49.346353 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:49.380477 2149446 cri.go:96] found id: ""
	I1222 01:51:49.380505 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.380514 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:49.380521 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:49.380590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:49.414563 2149446 cri.go:96] found id: ""
	I1222 01:51:49.414586 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.414595 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:49.414601 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:49.414660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:49.439047 2149446 cri.go:96] found id: ""
	I1222 01:51:49.439072 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.439081 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:49.439088 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:49.439171 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:49.465565 2149446 cri.go:96] found id: ""
	I1222 01:51:49.465592 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.465601 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:49.465607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:49.465685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:49.489816 2149446 cri.go:96] found id: ""
	I1222 01:51:49.489838 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.489847 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:49.489856 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:49.489871 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:49.555234 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:49.555269 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:49.573085 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:49.573124 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:49.640369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:49.632082    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.632738    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634246    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634633    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.636132    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:49.632082    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.632738    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634246    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634633    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.636132    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:49.640390 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:49.640402 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:49.668820 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:49.668855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:52.197481 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:52.208164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:52.208283 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:52.240218 2149446 cri.go:96] found id: ""
	I1222 01:51:52.240244 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.240252 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:52.240258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:52.240325 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:52.267476 2149446 cri.go:96] found id: ""
	I1222 01:51:52.267503 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.267512 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:52.267518 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:52.267588 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:52.292286 2149446 cri.go:96] found id: ""
	I1222 01:51:52.292310 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.292319 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:52.292332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:52.292395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:52.320140 2149446 cri.go:96] found id: ""
	I1222 01:51:52.320164 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.320173 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:52.320179 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:52.320256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:52.353854 2149446 cri.go:96] found id: ""
	I1222 01:51:52.353888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.353897 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:52.353903 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:52.354001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:52.383818 2149446 cri.go:96] found id: ""
	I1222 01:51:52.383851 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.383860 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:52.383867 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:52.383958 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:52.418591 2149446 cri.go:96] found id: ""
	I1222 01:51:52.418616 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.418624 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:52.418631 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:52.418714 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:52.449289 2149446 cri.go:96] found id: ""
	I1222 01:51:52.449383 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.449407 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:52.449430 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:52.449461 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:52.525065 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:52.525107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:52.543910 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:52.543942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:52.607708 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:52.599699    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.600218    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.601875    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.602334    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.603935    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:52.599699    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.600218    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.601875    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.602334    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.603935    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:52.607772 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:52.607790 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:52.636841 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:52.636879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:55.167236 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:55.178565 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:55.178664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:55.207461 2149446 cri.go:96] found id: ""
	I1222 01:51:55.207486 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.207501 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:55.207507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:55.207570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:55.232542 2149446 cri.go:96] found id: ""
	I1222 01:51:55.232615 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.232638 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:55.232657 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:55.232741 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:55.258972 2149446 cri.go:96] found id: ""
	I1222 01:51:55.258999 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.259008 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:55.259016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:55.259078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:55.285371 2149446 cri.go:96] found id: ""
	I1222 01:51:55.285397 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.285406 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:55.285413 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:55.285484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:55.313304 2149446 cri.go:96] found id: ""
	I1222 01:51:55.313354 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.313363 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:55.313370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:55.313439 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:55.347910 2149446 cri.go:96] found id: ""
	I1222 01:51:55.347940 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.347948 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:55.347955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:55.348028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:55.381847 2149446 cri.go:96] found id: ""
	I1222 01:51:55.381869 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.381877 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:55.381884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:55.381948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:55.418137 2149446 cri.go:96] found id: ""
	I1222 01:51:55.418161 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.418169 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:55.418179 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:55.418192 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:55.484714 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:55.484752 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:55.501483 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:55.501575 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:55.578456 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:55.569912    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.570649    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.572190    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.572838    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.574290    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:55.578476 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:55.578487 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:55.606594 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:55.606627 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:58.138273 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:58.149164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:58.149233 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:58.176492 2149446 cri.go:96] found id: ""
	I1222 01:51:58.176517 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.176525 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:58.176532 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:58.176592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:58.200700 2149446 cri.go:96] found id: ""
	I1222 01:51:58.200724 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.200733 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:58.200740 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:58.200799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:58.227476 2149446 cri.go:96] found id: ""
	I1222 01:51:58.227501 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.227511 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:58.227517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:58.227582 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:58.252693 2149446 cri.go:96] found id: ""
	I1222 01:51:58.252720 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.252728 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:58.252736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:58.252801 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:58.277812 2149446 cri.go:96] found id: ""
	I1222 01:51:58.277835 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.277843 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:58.277850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:58.277910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:58.303365 2149446 cri.go:96] found id: ""
	I1222 01:51:58.303400 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.303409 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:58.303432 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:58.303509 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:58.328480 2149446 cri.go:96] found id: ""
	I1222 01:51:58.328504 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.328513 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:58.328520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:58.328578 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:58.359399 2149446 cri.go:96] found id: ""
	I1222 01:51:58.359426 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.359436 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:58.359445 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:58.359456 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:58.391493 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:58.391529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:58.429463 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:58.429487 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:58.494632 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:58.494667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:58.511259 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:58.511294 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:58.572875 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:58.564819    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.565262    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566536    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566986    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.568425    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
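The pgrep probes land roughly three seconds apart (01:51:52, 01:51:55, 01:51:58, 01:52:01, ...), a fixed-interval poll with no backoff. A minimal sketch of such a wait loop follows; the deadline is a hypothetical value, while the pgrep pattern is the one from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's probe: pgrep for a kube-apiserver
// process whose command line mentions the profile name ("minikube").
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits non-zero when nothing matches
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // fixed interval, matching the log's cadence
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```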
	I1222 01:52:01.073148 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:01.084069 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:01.084185 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:01.116512 2149446 cri.go:96] found id: ""
	I1222 01:52:01.116590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.116626 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:01.116635 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:01.116729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:01.155908 2149446 cri.go:96] found id: ""
	I1222 01:52:01.155940 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.155948 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:01.155956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:01.156039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:01.190906 2149446 cri.go:96] found id: ""
	I1222 01:52:01.190992 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.191015 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:01.191036 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:01.191159 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:01.228209 2149446 cri.go:96] found id: ""
	I1222 01:52:01.228284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.228300 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:01.228311 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:01.228390 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:01.258097 2149446 cri.go:96] found id: ""
	I1222 01:52:01.258122 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.258131 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:01.258138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:01.258257 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:01.288999 2149446 cri.go:96] found id: ""
	I1222 01:52:01.289026 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.289035 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:01.289042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:01.289107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:01.318779 2149446 cri.go:96] found id: ""
	I1222 01:52:01.318806 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.318817 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:01.318825 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:01.318896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:01.354526 2149446 cri.go:96] found id: ""
	I1222 01:52:01.354551 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.354560 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:01.354568 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:01.354580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:01.377079 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:01.377178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:01.467471 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:01.456482    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.457416    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.459443    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.460466    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.462034    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:01.467498 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:01.467512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:01.497872 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:01.497924 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:01.538392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:01.538422 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:04.105458 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:04.117694 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:04.117772 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:04.145575 2149446 cri.go:96] found id: ""
	I1222 01:52:04.145610 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.145618 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:04.145624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:04.145682 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:04.175381 2149446 cri.go:96] found id: ""
	I1222 01:52:04.175407 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.175416 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:04.175422 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:04.175501 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:04.201486 2149446 cri.go:96] found id: ""
	I1222 01:52:04.201511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.201520 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:04.201526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:04.201627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:04.226750 2149446 cri.go:96] found id: ""
	I1222 01:52:04.226776 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.226785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:04.226791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:04.226868 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:04.254116 2149446 cri.go:96] found id: ""
	I1222 01:52:04.254180 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.254194 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:04.254201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:04.254260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:04.279876 2149446 cri.go:96] found id: ""
	I1222 01:52:04.279900 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.279909 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:04.279916 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:04.279979 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:04.305403 2149446 cri.go:96] found id: ""
	I1222 01:52:04.305425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.305433 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:04.305439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:04.305500 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:04.334607 2149446 cri.go:96] found id: ""
	I1222 01:52:04.334633 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.334641 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:04.334650 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:04.334682 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:04.373276 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:04.373306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:04.446848 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:04.446882 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:04.464981 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:04.465008 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:04.530912 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:04.522524    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.523282    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.524828    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.525152    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.526604    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:04.530931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:04.530943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:07.063151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:07.073864 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:07.073937 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:07.099937 2149446 cri.go:96] found id: ""
	I1222 01:52:07.099966 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.099975 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:07.099981 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:07.100043 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:07.127907 2149446 cri.go:96] found id: ""
	I1222 01:52:07.127934 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.127943 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:07.127949 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:07.128009 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:07.162965 2149446 cri.go:96] found id: ""
	I1222 01:52:07.162991 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.162999 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:07.163006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:07.163067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:07.192890 2149446 cri.go:96] found id: ""
	I1222 01:52:07.192916 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.192925 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:07.192932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:07.192994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:07.221463 2149446 cri.go:96] found id: ""
	I1222 01:52:07.221495 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.221504 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:07.221512 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:07.221572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:07.247489 2149446 cri.go:96] found id: ""
	I1222 01:52:07.247517 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.247527 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:07.247535 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:07.247600 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:07.272728 2149446 cri.go:96] found id: ""
	I1222 01:52:07.272755 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.272764 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:07.272771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:07.272849 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:07.297364 2149446 cri.go:96] found id: ""
	I1222 01:52:07.297389 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.297398 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:07.297407 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:07.297418 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:07.325214 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:07.325251 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:07.360148 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:07.360177 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:07.432261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:07.432295 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:07.450157 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:07.450186 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:07.516409 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:07.507778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.508648    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510237    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.512438    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:10.017883 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:10.030135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:10.030223 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:10.060521 2149446 cri.go:96] found id: ""
	I1222 01:52:10.060547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.060556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:10.060563 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:10.060627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:10.088074 2149446 cri.go:96] found id: ""
	I1222 01:52:10.088112 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.088124 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:10.088133 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:10.088200 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:10.130279 2149446 cri.go:96] found id: ""
	I1222 01:52:10.130306 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.130315 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:10.130321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:10.130387 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:10.163042 2149446 cri.go:96] found id: ""
	I1222 01:52:10.163067 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.163078 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:10.163085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:10.163147 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:10.190658 2149446 cri.go:96] found id: ""
	I1222 01:52:10.190684 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.190693 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:10.190700 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:10.190763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:10.218153 2149446 cri.go:96] found id: ""
	I1222 01:52:10.218181 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.218189 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:10.218197 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:10.218261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:10.244623 2149446 cri.go:96] found id: ""
	I1222 01:52:10.244647 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.244656 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:10.244662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:10.244755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:10.270908 2149446 cri.go:96] found id: ""
	I1222 01:52:10.270931 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.270939 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:10.270948 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:10.270959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:10.299810 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:10.299846 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:10.331687 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:10.331716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:10.407109 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:10.407148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:10.425255 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:10.425286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:10.506189 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:10.497058    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.497925    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.499600    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.500185    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.501961    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
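Each round walks a fixed list of control-plane component names and asks the runtime for matching containers; in this run every query returns an empty ID list. A rough Go sketch of that enumeration (the component names and crictl flags are taken from the log above; invoking crictl directly is an assumption, since minikube goes through its ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names are the ones the log queries, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints container IDs only; empty output means no container.
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container was found matching %q\n", name)
		}
	}
}
```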
	I1222 01:52:13.007836 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:13.019282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:13.019388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:13.045528 2149446 cri.go:96] found id: ""
	I1222 01:52:13.045553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.045562 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:13.045568 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:13.045631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:13.071460 2149446 cri.go:96] found id: ""
	I1222 01:52:13.071528 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.071542 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:13.071549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:13.071623 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:13.103717 2149446 cri.go:96] found id: ""
	I1222 01:52:13.103749 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.103758 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:13.103765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:13.103835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:13.133730 2149446 cri.go:96] found id: ""
	I1222 01:52:13.133757 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.133772 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:13.133787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:13.133846 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:13.170719 2149446 cri.go:96] found id: ""
	I1222 01:52:13.170799 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.170841 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:13.170860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:13.170948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:13.197725 2149446 cri.go:96] found id: ""
	I1222 01:52:13.197801 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.197823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:13.197842 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:13.197926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:13.224209 2149446 cri.go:96] found id: ""
	I1222 01:52:13.224404 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.224492 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:13.224517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:13.224639 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:13.249315 2149446 cri.go:96] found id: ""
	I1222 01:52:13.249353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.249362 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:13.249371 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:13.249382 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:13.280377 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:13.280404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:13.346072 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:13.346106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:13.364038 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:13.364070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:13.433131 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:13.424970    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.425643    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427164    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427697    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.429162    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:13.433163 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:13.433175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:15.961525 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:15.972093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:15.972167 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:15.996879 2149446 cri.go:96] found id: ""
	I1222 01:52:15.996913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:15.996922 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:15.996929 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:15.996994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:16.024677 2149446 cri.go:96] found id: ""
	I1222 01:52:16.024700 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.024709 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:16.024715 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:16.024777 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:16.050558 2149446 cri.go:96] found id: ""
	I1222 01:52:16.050590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.050599 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:16.050606 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:16.050666 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:16.076374 2149446 cri.go:96] found id: ""
	I1222 01:52:16.076400 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.076409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:16.076416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:16.076476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:16.107298 2149446 cri.go:96] found id: ""
	I1222 01:52:16.107323 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.107331 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:16.107339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:16.107402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:16.139868 2149446 cri.go:96] found id: ""
	I1222 01:52:16.139895 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.139904 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:16.139911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:16.139972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:16.170608 2149446 cri.go:96] found id: ""
	I1222 01:52:16.170631 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.170640 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:16.170646 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:16.170708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:16.195886 2149446 cri.go:96] found id: ""
	I1222 01:52:16.195913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.195932 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:16.195941 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:16.195954 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:16.262211 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:16.262249 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:16.280423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:16.280452 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:16.356185 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:16.346872    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.347685    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.349384    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.350021    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.351608    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:16.356258 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:16.356286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:16.391461 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:16.391538 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
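	The cycle above is minikube's apiserver wait loop: a pgrep for the apiserver process, then a crictl listing for each control-plane component by name, all returning empty. The same container check can be reproduced by hand on the node; a minimal sketch, assuming the default profile name "minikube" (the profile under test in this run may differ):

	  # Open a shell on the node (profile name is an assumption)
	  $ minikube ssh -p minikube
	  # List containers in any state whose name matches a component, exactly as the log does;
	  # an empty result means the component's container was never created.
	  $ sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	  $ sudo crictl --timeout=10s ps -a --quiet --name=etcd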
	I1222 01:52:18.932633 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:18.944359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:18.944431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:18.971356 2149446 cri.go:96] found id: ""
	I1222 01:52:18.971379 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.971388 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:18.971394 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:18.971457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:18.998044 2149446 cri.go:96] found id: ""
	I1222 01:52:18.998070 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.998078 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:18.998085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:18.998149 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:19.028529 2149446 cri.go:96] found id: ""
	I1222 01:52:19.028553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.028566 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:19.028573 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:19.028633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:19.052808 2149446 cri.go:96] found id: ""
	I1222 01:52:19.052833 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.052843 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:19.052850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:19.052910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:19.077981 2149446 cri.go:96] found id: ""
	I1222 01:52:19.078004 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.078012 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:19.078019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:19.078080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:19.105393 2149446 cri.go:96] found id: ""
	I1222 01:52:19.105422 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.105433 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:19.105439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:19.105498 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:19.146968 2149446 cri.go:96] found id: ""
	I1222 01:52:19.146995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.147004 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:19.147010 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:19.147075 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:19.176941 2149446 cri.go:96] found id: ""
	I1222 01:52:19.176965 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.176974 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:19.176983 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:19.176995 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:19.246864 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:19.246904 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:19.266042 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:19.266070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:19.330582 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:19.322391    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.323105    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.324610    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.325067    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.326613    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:19.330602 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:19.330615 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:19.366422 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:19.366457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
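	With no containers found, each pass falls back to gathering node-level logs, and the commands it runs are shown verbatim in the Run: lines. They can be replayed directly when triaging a run like this one; the invocations below are copied from the log:

	  # Last 400 kubelet journal lines: did the kubelet try (and fail) to start the static pods?
	  $ sudo journalctl -u kubelet -n 400
	  # Last 400 CRI-O journal lines: did the runtime reject sandbox or image operations?
	  $ sudo journalctl -u crio -n 400
	  # Kernel-level warnings and errors (OOM kills, cgroup trouble, etc.)
	  $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400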
	I1222 01:52:21.896072 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:21.906705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:21.906774 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:21.931888 2149446 cri.go:96] found id: ""
	I1222 01:52:21.931913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.931921 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:21.931928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:21.931988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:21.959852 2149446 cri.go:96] found id: ""
	I1222 01:52:21.959925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.959959 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:21.959984 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:21.960070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:21.984377 2149446 cri.go:96] found id: ""
	I1222 01:52:21.984402 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.984410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:21.984423 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:21.984485 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:22.015185 2149446 cri.go:96] found id: ""
	I1222 01:52:22.015212 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.015221 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:22.015227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:22.015290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:22.041402 2149446 cri.go:96] found id: ""
	I1222 01:52:22.041427 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.041436 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:22.041442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:22.041505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:22.069753 2149446 cri.go:96] found id: ""
	I1222 01:52:22.069780 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.069790 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:22.069797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:22.069863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:22.100905 2149446 cri.go:96] found id: ""
	I1222 01:52:22.100941 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.100950 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:22.100956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:22.101028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:22.133734 2149446 cri.go:96] found id: ""
	I1222 01:52:22.133770 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.133778 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:22.133787 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:22.133799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:22.219653 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:22.219690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:22.236668 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:22.236699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:22.306717 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:22.298969    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.299431    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301029    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301450    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.302886    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:22.306779 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:22.306799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:22.334960 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:22.334996 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
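	The recurring "connection refused" stderr is kubectl failing to reach an apiserver at https://localhost:8443, which is consistent with the empty kube-apiserver container listings: nothing is serving on that port. This can be confirmed independently of kubectl; a small sketch (ss and curl are not taken from this log, their availability on the node is an assumption):

	  # Is anything listening on the apiserver port?
	  $ sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	  # Probe the endpoint directly; with the apiserver down this fails with
	  # "connection refused", matching the kubectl errors above.
	  $ curl -ksS https://localhost:8443/healthz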
	I1222 01:52:24.888617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:24.899592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:24.899667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:24.924136 2149446 cri.go:96] found id: ""
	I1222 01:52:24.924161 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.924171 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:24.924177 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:24.924237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:24.949757 2149446 cri.go:96] found id: ""
	I1222 01:52:24.949782 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.949791 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:24.949797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:24.949862 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:24.975441 2149446 cri.go:96] found id: ""
	I1222 01:52:24.975468 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.975476 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:24.975482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:24.975544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:24.999474 2149446 cri.go:96] found id: ""
	I1222 01:52:24.999500 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.999509 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:24.999517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:24.999624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:25.034630 2149446 cri.go:96] found id: ""
	I1222 01:52:25.034655 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.034663 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:25.034670 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:25.034737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:25.060352 2149446 cri.go:96] found id: ""
	I1222 01:52:25.060375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.060385 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:25.060392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:25.060516 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:25.088743 2149446 cri.go:96] found id: ""
	I1222 01:52:25.088768 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.088777 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:25.088784 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:25.088845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:25.116748 2149446 cri.go:96] found id: ""
	I1222 01:52:25.116781 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.116790 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:25.116800 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:25.116813 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:25.198386 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:25.198430 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:25.216584 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:25.216619 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:25.285946 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:25.278237    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.278629    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280115    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280453    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.281691    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:25.285971 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:25.285986 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:25.313622 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:25.313652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
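	Each cycle opens with a process-level probe before the per-component container listings. The pgrep invocation is worth unpacking, since all three flags change its semantics; an annotated copy of the exact command from the log:

	  # -f  match the pattern against the full command line, not just the executable name
	  # -x  require the pattern to match that command line exactly (the pattern here is
	  #     a regex, kube-apiserver.*minikube.*)
	  # -n  print only the newest matching process
	  $ sudo pgrep -xnf kube-apiserver.*minikube.*
	  # A non-zero exit (no match) is what triggers the crictl checks that follow.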
	I1222 01:52:27.852014 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:27.862934 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:27.863049 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:27.888699 2149446 cri.go:96] found id: ""
	I1222 01:52:27.888735 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.888744 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:27.888751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:27.888822 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:27.914148 2149446 cri.go:96] found id: ""
	I1222 01:52:27.914223 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.914246 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:27.914259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:27.914337 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:27.939481 2149446 cri.go:96] found id: ""
	I1222 01:52:27.939560 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.939581 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:27.939595 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:27.939668 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:27.967633 2149446 cri.go:96] found id: ""
	I1222 01:52:27.967662 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.967671 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:27.967678 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:27.967785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:27.991808 2149446 cri.go:96] found id: ""
	I1222 01:52:27.991889 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.991911 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:27.991932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:27.992037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:28.023583 2149446 cri.go:96] found id: ""
	I1222 01:52:28.023622 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.023631 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:28.023638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:28.023709 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:28.052545 2149446 cri.go:96] found id: ""
	I1222 01:52:28.052625 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.052648 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:28.052672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:28.052763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:28.078522 2149446 cri.go:96] found id: ""
	I1222 01:52:28.078550 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.078558 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:28.078567 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:28.078580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:28.147757 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:28.147792 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:28.165530 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:28.165560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:28.232073 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:28.223681    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.224341    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.225912    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.226397    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.228007    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:28.232094 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:28.232107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:28.260533 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:28.260569 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
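	The "container status" step uses a small shell fallback chain, visible in its Run: line: command substitution resolves crictl's path if it is on PATH, otherwise `echo crictl` supplies the bare word so the command is still well-formed (and fails cleanly), and a final || falls back to the Docker CLI. Spelled out:

	  # `which crictl` prints a path if crictl is installed; `|| echo crictl` substitutes
	  # the literal word when it is not. If the whole crictl invocation then fails,
	  # try docker instead.
	  $ sudo `which crictl || echo crictl` ps -a || sudo docker ps -a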
	I1222 01:52:30.796554 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:30.807742 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:30.807813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:30.835334 2149446 cri.go:96] found id: ""
	I1222 01:52:30.835361 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.835370 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:30.835377 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:30.835467 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:30.861061 2149446 cri.go:96] found id: ""
	I1222 01:52:30.861101 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.861113 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:30.861121 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:30.861195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:30.888925 2149446 cri.go:96] found id: ""
	I1222 01:52:30.888999 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.889021 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:30.889042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:30.889128 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:30.916374 2149446 cri.go:96] found id: ""
	I1222 01:52:30.916448 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.916470 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:30.916490 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:30.916583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:30.941908 2149446 cri.go:96] found id: ""
	I1222 01:52:30.941932 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.941940 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:30.941947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:30.942017 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:30.968265 2149446 cri.go:96] found id: ""
	I1222 01:52:30.968290 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.968298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:30.968305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:30.968372 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:30.995148 2149446 cri.go:96] found id: ""
	I1222 01:52:30.995174 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.995183 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:30.995189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:30.995250 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:31.022834 2149446 cri.go:96] found id: ""
	I1222 01:52:31.022861 2149446 logs.go:282] 0 containers: []
	W1222 01:52:31.022870 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:31.022879 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:31.022891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:31.083576 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:31.075792    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.076576    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.077772    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.078220    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.079697    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:31.083600 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:31.083613 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:31.120484 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:31.120518 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:31.156273 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:31.156303 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:31.226587 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:31.226623 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
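	In a kubeadm-style cluster like minikube's, the control-plane components this loop looks for run as static pods that the kubelet creates from manifest files, so an empty crictl listing usually means the kubelet never got that far. Checking the manifests is a quick next step; a sketch assuming the standard kubeadm manifest directory (the path is not shown in this log):

	  # Static pod manifests the kubelet is expected to act on
	  $ sudo ls -l /etc/kubernetes/manifests
	  # If the manifests exist but no containers do, the kubelet journal gathered
	  # above is the place to look for the failure.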
	I1222 01:52:33.743333 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:33.753911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:33.753986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:33.779167 2149446 cri.go:96] found id: ""
	I1222 01:52:33.779192 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.779201 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:33.779207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:33.779265 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:33.804863 2149446 cri.go:96] found id: ""
	I1222 01:52:33.804892 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.804901 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:33.804913 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:33.804972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:33.830391 2149446 cri.go:96] found id: ""
	I1222 01:52:33.830414 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.830421 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:33.830428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:33.830488 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:33.856024 2149446 cri.go:96] found id: ""
	I1222 01:52:33.856047 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.856057 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:33.856064 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:33.856121 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:33.885352 2149446 cri.go:96] found id: ""
	I1222 01:52:33.885375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.885383 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:33.885389 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:33.885454 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:33.910795 2149446 cri.go:96] found id: ""
	I1222 01:52:33.910819 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.910827 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:33.910835 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:33.910922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:33.937408 2149446 cri.go:96] found id: ""
	I1222 01:52:33.937435 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.937444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:33.937451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:33.937513 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:33.964011 2149446 cri.go:96] found id: ""
	I1222 01:52:33.964037 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.964046 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:33.964055 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:33.964069 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:34.030109 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:34.021515    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.022250    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.023865    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.024450    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.026222    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:34.030135 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:34.030147 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:34.058869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:34.058905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:34.090878 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:34.090905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:34.173693 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:34.173734 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
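	The describe-nodes step dials localhost:8443 because that is what the node-local kubeconfig points at. Printing the configured server address makes that explicit; a minimal sketch using the same kubectl binary and kubeconfig as the log (the jsonpath expression assumes a single cluster entry):

	  $ sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl config view \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -o jsonpath='{.clusters[0].cluster.server}'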
	I1222 01:52:36.691703 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:36.703200 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:36.703287 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:36.729524 2149446 cri.go:96] found id: ""
	I1222 01:52:36.729547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.729556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:36.729562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:36.729620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:36.755426 2149446 cri.go:96] found id: ""
	I1222 01:52:36.755451 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.755459 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:36.755465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:36.755522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:36.780359 2149446 cri.go:96] found id: ""
	I1222 01:52:36.780384 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.780393 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:36.780399 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:36.780459 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:36.810342 2149446 cri.go:96] found id: ""
	I1222 01:52:36.810370 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.810380 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:36.810386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:36.810447 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:36.836550 2149446 cri.go:96] found id: ""
	I1222 01:52:36.836586 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.836597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:36.836619 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:36.836702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:36.863450 2149446 cri.go:96] found id: ""
	I1222 01:52:36.863476 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.863485 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:36.863492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:36.863554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:36.890913 2149446 cri.go:96] found id: ""
	I1222 01:52:36.890938 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.890947 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:36.890953 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:36.891014 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:36.921826 2149446 cri.go:96] found id: ""
	I1222 01:52:36.921849 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.921858 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:36.921867 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:36.921879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:36.938824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:36.938854 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:37.002681 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:36.993721    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.994257    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.995809    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.996244    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.997718    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:37.002770 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:37.002797 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:37.038076 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:37.038111 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:37.065632 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:37.065659 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
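	One caveat when reading the empty listings: crictl reporting no containers is only meaningful if the CRI endpoint itself is healthy, which the clean (if empty) crictl exits here suggest. It can also be checked explicitly; a sketch (the systemctl and crictl info usage is an assumption, not taken from this log):

	  # Is the CRI-O service running?
	  $ sudo systemctl is-active crio
	  # Can crictl talk to the runtime at all? Exit status 0 means the endpoint answered.
	  $ sudo crictl --timeout=10s info >/dev/null && echo "CRI endpoint reachable"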
	I1222 01:52:39.635717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:39.650457 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:39.650526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:39.678446 2149446 cri.go:96] found id: ""
	I1222 01:52:39.678470 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.678479 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:39.678485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:39.678543 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:39.703602 2149446 cri.go:96] found id: ""
	I1222 01:52:39.703627 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.703636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:39.703643 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:39.703702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:39.728380 2149446 cri.go:96] found id: ""
	I1222 01:52:39.728406 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.728415 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:39.728421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:39.728484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:39.753237 2149446 cri.go:96] found id: ""
	I1222 01:52:39.753261 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.753269 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:39.753275 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:39.753373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:39.778213 2149446 cri.go:96] found id: ""
	I1222 01:52:39.778244 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.778253 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:39.778259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:39.778319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:39.808901 2149446 cri.go:96] found id: ""
	I1222 01:52:39.808925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.808933 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:39.808940 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:39.808997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:39.834452 2149446 cri.go:96] found id: ""
	I1222 01:52:39.834478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.834488 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:39.834495 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:39.834556 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:39.859475 2149446 cri.go:96] found id: ""
	I1222 01:52:39.859498 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.859507 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
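Each cycle then sweeps the same eight component names through `crictl ps -a --quiet --name=<name>`; `--quiet` prints only container IDs, one per line, so empty output is what the log reports as "0 containers". A sketch of that sweep under the same assumption (run on the node; `listIDs` is an illustrative helper, not minikube's cri.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listIDs runs `crictl ps -a --quiet --name=<name>`: --quiet emits one
// container ID per line, so no output means no matching container.
func listIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "--timeout=10s",
		"ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```

In this run every sweep comes back empty, which is why the log then falls back to gathering host-level logs instead of container logs.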
	I1222 01:52:39.859515 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:39.859526 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:39.893788 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:39.893816 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
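The "Gathering logs for kubelet" and "Gathering logs for CRI-O" steps are plain unit-scoped journal reads, `journalctl -u <unit> -n 400`, executed over SSH by ssh_runner. A sketch of the same gather, assuming local execution rather than minikube's SSH transport (`tailUnit` is an illustrative name):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailUnit fetches the last n journal lines for a systemd unit, the same
// shape as the kubelet and crio gathers in the log.
func tailUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl",
		"-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		logs, err := tailUnit(unit, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== last 400 lines of %s ===\n%s\n", unit, logs)
	}
}
```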
	I1222 01:52:39.958922 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:39.958959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:39.975346 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:39.975380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:40.062822 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:40.052520    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.053798    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.054613    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.056904    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.057401    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
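Every `kubectl describe nodes` attempt fails the same way: the client cannot even open a TCP connection to localhost:8443 (the server named in the kubeconfig), which is consistent with the empty crictl sweeps above, since no apiserver container means nothing is bound to the port. The repeated memcache.go lines are just the client retrying its API discovery request before giving up. A quick probe that distinguishes "refused" from "listening but unhealthy" (a sketch; the address comes from the kubeconfig in the log):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig in the log points at https://localhost:8443.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" means nothing is listening at all,
		// matching the memcache.go errors: the apiserver never started.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("port open; apiserver is at least listening")
}
```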
	I1222 01:52:40.062851 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:40.062864 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:42.596761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:42.611989 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:42.612054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:42.650231 2149446 cri.go:96] found id: ""
	I1222 01:52:42.650253 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.650261 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:42.650268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:42.650326 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:42.677520 2149446 cri.go:96] found id: ""
	I1222 01:52:42.677547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.677556 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:42.677562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:42.677620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:42.702450 2149446 cri.go:96] found id: ""
	I1222 01:52:42.702478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.702487 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:42.702493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:42.702549 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:42.727875 2149446 cri.go:96] found id: ""
	I1222 01:52:42.727947 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.727975 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:42.727994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:42.728107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:42.753077 2149446 cri.go:96] found id: ""
	I1222 01:52:42.753159 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.753187 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:42.753207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:42.753360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:42.779648 2149446 cri.go:96] found id: ""
	I1222 01:52:42.779671 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.779679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:42.779686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:42.779761 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:42.808099 2149446 cri.go:96] found id: ""
	I1222 01:52:42.808172 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.808181 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:42.808188 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:42.808247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:42.834339 2149446 cri.go:96] found id: ""
	I1222 01:52:42.834367 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.834376 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:42.834385 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:42.834399 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:42.899680 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:42.899717 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:42.916710 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:42.916749 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:42.983675 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:42.975196    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.975990    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.977681    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.978270    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.979868    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:42.983698 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:42.983710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:43.011581 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:43.011616 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
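The timestamps show the whole cycle (one pgrep, eight crictl sweeps, five log gathers) repeating roughly every three seconds (01:52:39.6, 01:52:42.6, 01:52:45.5, ...), i.e. a poll-until-healthy loop with a long overall deadline rather than a one-shot check. A minimal sketch of that shape; the interval, deadline, and helper name `pollUntil` are illustrative, not minikube's actual values:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil re-runs check every interval until it succeeds or the overall
// deadline passes, the same rhythm as the ~3s cycles in the timestamps.
func pollUntil(interval, deadline time.Duration, check func() bool) error {
	timeout := time.After(deadline)
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-timeout:
			return errors.New("apiserver never became healthy")
		case <-tick.C:
		}
	}
}

func main() {
	err := pollUntil(3*time.Second, 30*time.Second, func() bool {
		fmt.Println("probing apiserver ...") // stand-in for pgrep + crictl
		return false                         // stays down, as in this log
	})
	fmt.Println(err)
}
```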
	I1222 01:52:45.543456 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:45.553928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:45.554038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:45.578837 2149446 cri.go:96] found id: ""
	I1222 01:52:45.578860 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.578868 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:45.578874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:45.578932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:45.612293 2149446 cri.go:96] found id: ""
	I1222 01:52:45.612315 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.612323 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:45.612340 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:45.612407 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:45.648009 2149446 cri.go:96] found id: ""
	I1222 01:52:45.648043 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.648052 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:45.648059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:45.648157 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:45.675586 2149446 cri.go:96] found id: ""
	I1222 01:52:45.675609 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.675618 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:45.675624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:45.675686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:45.702507 2149446 cri.go:96] found id: ""
	I1222 01:52:45.702576 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.702597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:45.702610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:45.702686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:45.729853 2149446 cri.go:96] found id: ""
	I1222 01:52:45.729921 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.729943 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:45.729964 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:45.730047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:45.760311 2149446 cri.go:96] found id: ""
	I1222 01:52:45.760387 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.760420 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:45.760439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:45.760522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:45.785968 2149446 cri.go:96] found id: ""
	I1222 01:52:45.785995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.786005 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:45.786014 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:45.786027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:45.854201 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:45.854240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:45.871755 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:45.871787 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:45.937315 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:45.929306    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.929882    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.931547    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.932074    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.933637    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:45.937355 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:45.937368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:45.966159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:45.966197 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
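The "container status" gather is a small shell fallback chain: `which crictl || echo crictl` keeps the command word non-empty whether or not crictl is on PATH, and if the crictl invocation fails, `|| sudo docker ps -a` runs instead, so the same gather works on both CRI-O and Docker nodes. A sketch invoking the same one-liner (assuming local execution):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same fallback chain as the log: try crictl first, then docker.
	// `which crictl || echo crictl` guarantees a non-empty command word
	// even when crictl is missing; the call then fails and the
	// `|| sudo docker ps -a` branch runs instead.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
```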
	I1222 01:52:48.499646 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:48.510238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:48.510330 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:48.536889 2149446 cri.go:96] found id: ""
	I1222 01:52:48.536923 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.536932 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:48.536939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:48.537006 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:48.563218 2149446 cri.go:96] found id: ""
	I1222 01:52:48.563284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.563305 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:48.563317 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:48.563393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:48.588527 2149446 cri.go:96] found id: ""
	I1222 01:52:48.588561 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.588570 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:48.588576 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:48.588643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:48.621974 2149446 cri.go:96] found id: ""
	I1222 01:52:48.622045 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.622080 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:48.622106 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:48.622197 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:48.654895 2149446 cri.go:96] found id: ""
	I1222 01:52:48.654970 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.654992 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:48.655014 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:48.655102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:48.692809 2149446 cri.go:96] found id: ""
	I1222 01:52:48.692885 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.692911 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:48.692931 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:48.693056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:48.717194 2149446 cri.go:96] found id: ""
	I1222 01:52:48.717272 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.717294 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:48.717314 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:48.717425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:48.742591 2149446 cri.go:96] found id: ""
	I1222 01:52:48.742616 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.742625 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:48.742633 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:48.742663 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:48.801798 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:48.794377    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.794846    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796207    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796539    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.798000    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:48.801831 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:48.801843 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:48.828911 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:48.828944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.860768 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:48.860798 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:48.925684 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:48.925719 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
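The dmesg gather keeps only kernel messages at warning severity or worse (`--level warn,err,crit,alert,emerg`), uses human-readable timestamps without a pager or colour (`-PH -L=never`), and trims to the last 400 lines; it returns within milliseconds on this run, suggesting little kernel noise. A sketch of the same filter:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the gather in the log: -H human-readable timestamps,
	// -P no pager, -L=never no colour, warnings and worse only,
	// last 400 lines.
	script := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg gather failed:", err)
		return
	}
	fmt.Print(string(out))
}
```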
	I1222 01:52:51.443762 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:51.454909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:51.454993 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:51.484486 2149446 cri.go:96] found id: ""
	I1222 01:52:51.484511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.484520 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:51.484526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:51.484590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:51.510790 2149446 cri.go:96] found id: ""
	I1222 01:52:51.510814 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.510822 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:51.510828 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:51.510886 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:51.537007 2149446 cri.go:96] found id: ""
	I1222 01:52:51.537030 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.537039 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:51.537050 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:51.537110 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:51.563454 2149446 cri.go:96] found id: ""
	I1222 01:52:51.563479 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.563487 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:51.563494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:51.563551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:51.590407 2149446 cri.go:96] found id: ""
	I1222 01:52:51.590433 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.590442 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:51.590448 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:51.590512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:51.626498 2149446 cri.go:96] found id: ""
	I1222 01:52:51.626525 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.626533 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:51.626540 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:51.626604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:51.653575 2149446 cri.go:96] found id: ""
	I1222 01:52:51.653601 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.653610 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:51.653616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:51.653676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:51.686395 2149446 cri.go:96] found id: ""
	I1222 01:52:51.686425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.686435 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:51.686444 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:51.686457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:51.756701 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:51.756737 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.773423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:51.773525 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:51.837429 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:51.829362    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.830109    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.831677    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.832019    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.833581    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:51.837453 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:51.837467 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:51.870792 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:51.870830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.401483 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:54.411791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:54.411865 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:54.439704 2149446 cri.go:96] found id: ""
	I1222 01:52:54.439729 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.439737 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:54.439743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:54.439811 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:54.464997 2149446 cri.go:96] found id: ""
	I1222 01:52:54.465020 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.465029 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:54.465047 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:54.465112 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:54.489955 2149446 cri.go:96] found id: ""
	I1222 01:52:54.489979 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.489988 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:54.489994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:54.490051 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:54.520365 2149446 cri.go:96] found id: ""
	I1222 01:52:54.520390 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.520400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:54.520406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:54.520463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:54.546850 2149446 cri.go:96] found id: ""
	I1222 01:52:54.546875 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.546883 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:54.546890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:54.546971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:54.572315 2149446 cri.go:96] found id: ""
	I1222 01:52:54.572353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.572363 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:54.572385 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:54.572466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:54.606370 2149446 cri.go:96] found id: ""
	I1222 01:52:54.606405 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.606414 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:54.606421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:54.606494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:54.640572 2149446 cri.go:96] found id: ""
	I1222 01:52:54.640598 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.640607 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:54.640616 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:54.640628 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:54.708604 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:54.700120    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.700711    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.702391    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.703169    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.704748    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:54.708624 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:54.708637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:54.737164 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:54.737199 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.768207 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:54.768291 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:54.833444 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:54.833481 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
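Note that the five "Gathering logs for ..." steps come out in a different order each cycle (container status first in one pass, describe nodes first in another). That shuffling is the signature of ranging over a Go map, whose iteration order is unspecified and randomized by the runtime. A tiny demonstration (the map contents here just label the five gathers seen in this log):

```go
package main

import "fmt"

func main() {
	// Go randomizes map iteration order on each range, which would
	// explain why the per-cycle gather order shuffles in the log.
	gathers := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg --level warn,... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	for pass := 1; pass <= 3; pass++ {
		fmt.Printf("pass %d:", pass)
		for name := range gathers {
			fmt.Printf(" %s;", name) // order often differs per pass
		}
		fmt.Println()
	}
}
```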
	I1222 01:52:57.350685 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:57.362041 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:57.362120 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:57.395509 2149446 cri.go:96] found id: ""
	I1222 01:52:57.395533 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.395542 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:57.395548 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:57.395606 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:57.422756 2149446 cri.go:96] found id: ""
	I1222 01:52:57.422778 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.422787 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:57.422794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:57.422856 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:57.449198 2149446 cri.go:96] found id: ""
	I1222 01:52:57.449274 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.449297 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:57.449332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:57.449410 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:57.475089 2149446 cri.go:96] found id: ""
	I1222 01:52:57.475116 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.475136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:57.475142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:57.475239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:57.500721 2149446 cri.go:96] found id: ""
	I1222 01:52:57.500747 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.500755 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:57.500763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:57.500823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:57.526256 2149446 cri.go:96] found id: ""
	I1222 01:52:57.526283 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.526292 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:57.526299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:57.526361 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:57.551020 2149446 cri.go:96] found id: ""
	I1222 01:52:57.551098 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.551122 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:57.551135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:57.551251 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:57.578237 2149446 cri.go:96] found id: ""
	I1222 01:52:57.578264 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.578272 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:57.578281 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:57.578293 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:57.647976 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:57.648013 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:57.669609 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:57.669637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:57.736734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:57.727886    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.728902    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730342    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730664    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.732062    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:57.736758 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:57.736777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:57.765104 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:57.765143 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:00.296700 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:00.314134 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:00.314362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:00.360664 2149446 cri.go:96] found id: ""
	I1222 01:53:00.360694 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.360703 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:00.360709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:00.360787 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:00.400391 2149446 cri.go:96] found id: ""
	I1222 01:53:00.400434 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.400447 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:00.400458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:00.400534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:00.437859 2149446 cri.go:96] found id: ""
	I1222 01:53:00.437883 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.437892 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:00.437899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:00.437960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:00.468446 2149446 cri.go:96] found id: ""
	I1222 01:53:00.468514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.468536 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:00.468555 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:00.468641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:00.494431 2149446 cri.go:96] found id: ""
	I1222 01:53:00.494462 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.494471 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:00.494486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:00.494564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:00.520633 2149446 cri.go:96] found id: ""
	I1222 01:53:00.520660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.520669 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:00.520675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:00.520735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:00.552618 2149446 cri.go:96] found id: ""
	I1222 01:53:00.552646 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.552655 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:00.552661 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:00.552721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:00.578469 2149446 cri.go:96] found id: ""
	I1222 01:53:00.578492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.578500 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:00.578509 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:00.578523 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:00.650905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:00.650939 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:00.669192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:00.669220 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:00.732631 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:00.724784    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.725580    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727143    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727454    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.728888    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:00.732650 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:00.732662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:00.761586 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:00.761617 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.289824 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:03.300567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:03.300641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:03.326667 2149446 cri.go:96] found id: ""
	I1222 01:53:03.326690 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.326698 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:03.326705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:03.326763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:03.363614 2149446 cri.go:96] found id: ""
	I1222 01:53:03.363637 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.363652 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:03.363662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:03.363743 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:03.391010 2149446 cri.go:96] found id: ""
	I1222 01:53:03.391033 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.391047 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:03.391054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:03.391126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:03.419391 2149446 cri.go:96] found id: ""
	I1222 01:53:03.419418 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.419426 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:03.419433 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:03.419494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:03.445436 2149446 cri.go:96] found id: ""
	I1222 01:53:03.445461 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.445470 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:03.445476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:03.445534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:03.475787 2149446 cri.go:96] found id: ""
	I1222 01:53:03.475812 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.475821 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:03.475827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:03.475894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:03.502216 2149446 cri.go:96] found id: ""
	I1222 01:53:03.502247 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.502256 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:03.502262 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:03.502323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:03.532273 2149446 cri.go:96] found id: ""
	I1222 01:53:03.532294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.532303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
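The eight "listing CRI containers" probes above are one sweep over the control-plane components minikube expects to find. A sketch of the equivalent loop, with the crictl flags taken verbatim from the log (empty output is what the log records as found id: ""):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # -a includes exited containers; --quiet prints only container IDs
      sudo crictl --timeout=10s ps -a --quiet --name="$name"
    done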
	I1222 01:53:03.532312 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:03.532323 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:03.548937 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:03.548964 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:03.633734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:03.626155    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.626558    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628013    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628303    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.629725    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:03.633808 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:03.633837 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:03.665918 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:03.665950 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.698908 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:03.698933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:06.272029 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:06.282999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:06.283072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:06.309858 2149446 cri.go:96] found id: ""
	I1222 01:53:06.309882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.309890 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:06.309896 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:06.309954 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:06.334809 2149446 cri.go:96] found id: ""
	I1222 01:53:06.334837 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.334845 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:06.334852 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:06.334922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:06.366780 2149446 cri.go:96] found id: ""
	I1222 01:53:06.366807 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.366816 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:06.366823 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:06.366884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:06.396941 2149446 cri.go:96] found id: ""
	I1222 01:53:06.397018 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.397040 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:06.397057 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:06.397162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:06.424268 2149446 cri.go:96] found id: ""
	I1222 01:53:06.424294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.424302 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:06.424309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:06.424384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:06.451699 2149446 cri.go:96] found id: ""
	I1222 01:53:06.451723 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.451732 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:06.451739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:06.451799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:06.480588 2149446 cri.go:96] found id: ""
	I1222 01:53:06.480610 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.480618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:06.480624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:06.480685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:06.508393 2149446 cri.go:96] found id: ""
	I1222 01:53:06.508419 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.508428 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:06.508437 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:06.508449 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:06.525070 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:06.525101 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:06.590361 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:06.581622    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.582460    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.583922    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.584374    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.586045    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:06.590383 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:06.590398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:06.624081 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:06.624115 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:06.663556 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:06.663585 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
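The "container status" gatherer is deliberately defensive. In the command above, `which crictl || echo crictl` substitutes the resolved path when crictl is on the caller's PATH and otherwise falls back to the bare name for sudo to resolve; if crictl fails outright, docker ps -a is tried as a last resort for the docker runtime. The same chain, annotated:

    # resolved crictl path if available, else bare name; docker as fallback
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a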
	I1222 01:53:09.229815 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:09.240655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:09.240737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:09.265982 2149446 cri.go:96] found id: ""
	I1222 01:53:09.266004 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.266013 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:09.266019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:09.266078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:09.291816 2149446 cri.go:96] found id: ""
	I1222 01:53:09.291841 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.291849 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:09.291855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:09.291914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:09.321089 2149446 cri.go:96] found id: ""
	I1222 01:53:09.321114 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.321123 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:09.321129 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:09.321187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:09.351795 2149446 cri.go:96] found id: ""
	I1222 01:53:09.351824 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.351833 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:09.351839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:09.351897 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:09.386800 2149446 cri.go:96] found id: ""
	I1222 01:53:09.386828 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.386837 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:09.386844 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:09.386902 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:09.416549 2149446 cri.go:96] found id: ""
	I1222 01:53:09.416574 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.416582 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:09.416589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:09.416648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:09.442626 2149446 cri.go:96] found id: ""
	I1222 01:53:09.442649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.442662 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:09.442671 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:09.442770 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:09.468113 2149446 cri.go:96] found id: ""
	I1222 01:53:09.468187 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.468209 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:09.468230 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:09.468270 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:09.531958 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:09.523337    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.523854    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.525572    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.526159    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.527780    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:09.531982 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:09.531994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:09.559710 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:09.559743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:09.601932 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:09.602030 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:09.676284 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:09.676317 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
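Each retry cycle opens with the pgrep probe seen above, and its flags are worth decoding: -f matches the pattern against the full command line, -x requires the pattern to match that whole line exactly, and -n keeps only the newest matching process. A sketch of the same probe used as a liveness gate (pattern copied from the log, quoted here to avoid shell globbing):

    # exits 0 only while a matching kube-apiserver process exists
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo apiserver up || echo apiserver down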
	I1222 01:53:12.195759 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:12.206480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:12.206553 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:12.235418 2149446 cri.go:96] found id: ""
	I1222 01:53:12.235491 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.235513 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:12.235527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:12.235611 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:12.261441 2149446 cri.go:96] found id: ""
	I1222 01:53:12.261467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.261476 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:12.261482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:12.261551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:12.291019 2149446 cri.go:96] found id: ""
	I1222 01:53:12.291044 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.291053 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:12.291059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:12.291117 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:12.317108 2149446 cri.go:96] found id: ""
	I1222 01:53:12.317132 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.317152 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:12.317160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:12.317222 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:12.345136 2149446 cri.go:96] found id: ""
	I1222 01:53:12.345164 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.345173 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:12.345180 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:12.345239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:12.372959 2149446 cri.go:96] found id: ""
	I1222 01:53:12.372991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.373000 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:12.373006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:12.373076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:12.413013 2149446 cri.go:96] found id: ""
	I1222 01:53:12.413049 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.413058 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:12.413065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:12.413134 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:12.439197 2149446 cri.go:96] found id: ""
	I1222 01:53:12.439225 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.439235 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:12.439245 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:12.439256 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:12.469256 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:12.469285 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:12.534711 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:12.534748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:12.551566 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:12.551637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:12.624758 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:12.616140    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.617087    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.618606    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.619192    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.620695    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:12.624828 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:12.624858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.157013 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:15.168255 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:15.168328 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:15.195462 2149446 cri.go:96] found id: ""
	I1222 01:53:15.195486 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.195494 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:15.195501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:15.195560 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:15.227170 2149446 cri.go:96] found id: ""
	I1222 01:53:15.227198 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.227208 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:15.227215 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:15.227282 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:15.254951 2149446 cri.go:96] found id: ""
	I1222 01:53:15.254975 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.254983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:15.254990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:15.255073 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:15.280554 2149446 cri.go:96] found id: ""
	I1222 01:53:15.280577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.280586 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:15.280592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:15.280677 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:15.306166 2149446 cri.go:96] found id: ""
	I1222 01:53:15.306189 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.306198 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:15.306203 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:15.306261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:15.331390 2149446 cri.go:96] found id: ""
	I1222 01:53:15.331414 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.331422 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:15.331429 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:15.331487 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:15.372404 2149446 cri.go:96] found id: ""
	I1222 01:53:15.372433 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.372442 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:15.372449 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:15.372512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:15.404854 2149446 cri.go:96] found id: ""
	I1222 01:53:15.404894 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.404902 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:15.404918 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:15.404930 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:15.469796 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:15.469833 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:15.487192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:15.487222 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:15.550547 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:15.541963    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.542670    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.544415    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.545061    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.546601    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:15.550568 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:15.550582 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.581523 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:15.581556 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:18.130208 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:18.141246 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:18.141343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:18.168196 2149446 cri.go:96] found id: ""
	I1222 01:53:18.168219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.168227 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:18.168234 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:18.168294 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:18.194001 2149446 cri.go:96] found id: ""
	I1222 01:53:18.194025 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.194034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:18.194040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:18.194101 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:18.220324 2149446 cri.go:96] found id: ""
	I1222 01:53:18.220354 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.220362 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:18.220369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:18.220427 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:18.249998 2149446 cri.go:96] found id: ""
	I1222 01:53:18.250027 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.250036 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:18.250042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:18.250102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:18.275763 2149446 cri.go:96] found id: ""
	I1222 01:53:18.275789 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.275798 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:18.275805 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:18.275863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:18.304972 2149446 cri.go:96] found id: ""
	I1222 01:53:18.305001 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.305009 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:18.305015 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:18.305074 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:18.330117 2149446 cri.go:96] found id: ""
	I1222 01:53:18.330139 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.330147 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:18.330154 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:18.330214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:18.368632 2149446 cri.go:96] found id: ""
	I1222 01:53:18.368663 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.368672 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:18.368680 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:18.368727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:18.442905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:18.442946 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:18.459582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:18.459658 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:18.528179 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:18.516076    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.516516    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.522440    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.523458    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.524496    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:18.528201 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:18.528213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:18.555680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:18.555710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.084080 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:21.094603 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:21.094678 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:21.118899 2149446 cri.go:96] found id: ""
	I1222 01:53:21.118926 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.118936 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:21.118943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:21.119001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:21.147861 2149446 cri.go:96] found id: ""
	I1222 01:53:21.147890 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.147898 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:21.147905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:21.147968 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:21.172300 2149446 cri.go:96] found id: ""
	I1222 01:53:21.172328 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.172344 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:21.172350 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:21.172408 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:21.196518 2149446 cri.go:96] found id: ""
	I1222 01:53:21.196547 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.196555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:21.196561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:21.196621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:21.222351 2149446 cri.go:96] found id: ""
	I1222 01:53:21.222373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.222381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:21.222387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:21.222445 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:21.247014 2149446 cri.go:96] found id: ""
	I1222 01:53:21.247056 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.247065 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:21.247072 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:21.247130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:21.272507 2149446 cri.go:96] found id: ""
	I1222 01:53:21.272530 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.272538 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:21.272545 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:21.272603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:21.298865 2149446 cri.go:96] found id: ""
	I1222 01:53:21.298930 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.298955 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:21.298969 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:21.298981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:21.375967 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:21.367288   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.368097   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.369766   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.370303   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.371978   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:21.375990 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:21.376003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:21.407155 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:21.407194 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.439948 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:21.439978 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:21.506644 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:21.506681 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
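The remaining gatherers are bounded tails of the host logs, so each three-second retry stays cheap. The same three commands as in the log, annotated:

    sudo journalctl -u crio -n 400      # last 400 lines of the CRI-O unit
    sudo journalctl -u kubelet -n 400   # last 400 lines of the kubelet unit
    # no pager (-P), human-readable timestamps (-H), colour off (-L=never),
    # warning level and worse only, capped at 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400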
	I1222 01:53:24.027394 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:24.038461 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:24.038535 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:24.065524 2149446 cri.go:96] found id: ""
	I1222 01:53:24.065549 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.065558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:24.065564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:24.065631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:24.096630 2149446 cri.go:96] found id: ""
	I1222 01:53:24.096660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.096670 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:24.096677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:24.096744 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:24.122428 2149446 cri.go:96] found id: ""
	I1222 01:53:24.122453 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.122462 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:24.122468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:24.122533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:24.147612 2149446 cri.go:96] found id: ""
	I1222 01:53:24.147634 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.147642 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:24.147649 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:24.147713 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:24.176841 2149446 cri.go:96] found id: ""
	I1222 01:53:24.176913 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.176933 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:24.176954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:24.177042 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:24.206047 2149446 cri.go:96] found id: ""
	I1222 01:53:24.206073 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.206082 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:24.206089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:24.206148 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:24.232264 2149446 cri.go:96] found id: ""
	I1222 01:53:24.232292 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.232301 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:24.232307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:24.232376 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:24.258596 2149446 cri.go:96] found id: ""
	I1222 01:53:24.258624 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.258633 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:24.258642 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:24.258655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:24.286162 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:24.286196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:24.314664 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:24.314690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:24.384317 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:24.384405 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:24.404914 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:24.404992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:24.478461 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:24.470772   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.471164   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472646   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472964   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.474221   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
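	Every polling cycle above runs the same battery of checks. As a minimal sketch, the cycle can be replayed by hand with the commands recorded in the log (paths and flags copied verbatim from the entries above; shell access to the minikube node is assumed):

	    # Is a kube-apiserver process for this profile running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Ask the CRI runtime for containers of each control-plane component.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl --timeout=10s ps -a --quiet --name="$name"
	    done

	    # Collect the same logs minikube gathers when the checks come back empty.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig

	An empty result from every crictl call, as seen throughout this section, means no control-plane container was ever created, which is why the final kubectl call cannot reach localhost:8443.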
	I1222 01:53:26.978694 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:26.989065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:26.989132 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:27.017546 2149446 cri.go:96] found id: ""
	I1222 01:53:27.017577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.017586 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:27.017593 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:27.017662 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:27.047826 2149446 cri.go:96] found id: ""
	I1222 01:53:27.047855 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.047863 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:27.047871 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:27.047931 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:27.073912 2149446 cri.go:96] found id: ""
	I1222 01:53:27.073940 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.073952 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:27.073958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:27.074018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:27.098951 2149446 cri.go:96] found id: ""
	I1222 01:53:27.098974 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.098982 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:27.098988 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:27.099047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:27.124715 2149446 cri.go:96] found id: ""
	I1222 01:53:27.124739 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.124748 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:27.124754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:27.124817 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:27.150398 2149446 cri.go:96] found id: ""
	I1222 01:53:27.150420 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.150431 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:27.150437 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:27.150502 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:27.176315 2149446 cri.go:96] found id: ""
	I1222 01:53:27.176347 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.176356 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:27.176386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:27.176493 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:27.203872 2149446 cri.go:96] found id: ""
	I1222 01:53:27.203953 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.203977 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:27.204000 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:27.204034 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:27.237039 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:27.237067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:27.301398 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:27.301432 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:27.317630 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:27.317657 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:27.393864 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:27.383878   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.385765   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.387565   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.388163   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.389778   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:27.393934 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:27.393960 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:29.927573 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:29.938417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:29.938524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:29.964036 2149446 cri.go:96] found id: ""
	I1222 01:53:29.964065 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.964073 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:29.964080 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:29.964138 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:29.988604 2149446 cri.go:96] found id: ""
	I1222 01:53:29.988632 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.988641 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:29.988647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:29.988708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:30.026459 2149446 cri.go:96] found id: ""
	I1222 01:53:30.026487 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.026497 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:30.026504 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:30.026573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:30.057912 2149446 cri.go:96] found id: ""
	I1222 01:53:30.057939 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.057949 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:30.057955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:30.058030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:30.083599 2149446 cri.go:96] found id: ""
	I1222 01:53:30.083668 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.083690 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:30.083709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:30.083806 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:30.110551 2149446 cri.go:96] found id: ""
	I1222 01:53:30.110620 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.110641 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:30.110655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:30.110735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:30.137174 2149446 cri.go:96] found id: ""
	I1222 01:53:30.137203 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.137211 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:30.137218 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:30.137277 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:30.162716 2149446 cri.go:96] found id: ""
	I1222 01:53:30.162753 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.162763 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:30.162772 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:30.162791 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:30.227759 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:30.227794 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:30.245443 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:30.245482 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:30.307769 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:30.299415   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.299980   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301457   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301852   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.303320   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:30.307792 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:30.307803 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:30.335176 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:30.335210 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
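	The "container status" gatherer tolerates a node without crictl on the PATH; unpacked, the one-liner in the log reads:

	    # Use crictl when installed; otherwise the substitution yields the literal
	    # word "crictl", that invocation fails, and docker is tried as a fallback.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a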
	I1222 01:53:32.871091 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:32.881783 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:32.881907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:32.916036 2149446 cri.go:96] found id: ""
	I1222 01:53:32.916106 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.916128 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:32.916147 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:32.916230 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:32.943595 2149446 cri.go:96] found id: ""
	I1222 01:53:32.943619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.943627 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:32.943634 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:32.943692 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:32.968836 2149446 cri.go:96] found id: ""
	I1222 01:53:32.968860 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.968868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:32.968874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:32.968956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:32.997882 2149446 cri.go:96] found id: ""
	I1222 01:53:32.997950 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.997973 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:32.997985 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:32.998068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:33.026110 2149446 cri.go:96] found id: ""
	I1222 01:53:33.026137 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.026145 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:33.026152 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:33.026229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:33.055281 2149446 cri.go:96] found id: ""
	I1222 01:53:33.055306 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.055315 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:33.055321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:33.055398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:33.086387 2149446 cri.go:96] found id: ""
	I1222 01:53:33.086465 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.086486 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:33.086499 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:33.086576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:33.112687 2149446 cri.go:96] found id: ""
	I1222 01:53:33.112730 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.112740 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:33.112749 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:33.112760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:33.181024 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:33.181067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:33.198158 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:33.198190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:33.273247 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:33.265089   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.265940   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.267575   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.268051   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.269666   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:33.273341 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:33.273378 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:33.301159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:33.301190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:35.836720 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:35.847980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:35.848054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:35.879849 2149446 cri.go:96] found id: ""
	I1222 01:53:35.879879 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.879888 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:35.879894 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:35.879956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:35.915400 2149446 cri.go:96] found id: ""
	I1222 01:53:35.915426 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.915434 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:35.915441 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:35.915511 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:35.940054 2149446 cri.go:96] found id: ""
	I1222 01:53:35.940078 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.940086 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:35.940093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:35.940153 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:35.970602 2149446 cri.go:96] found id: ""
	I1222 01:53:35.970625 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.970634 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:35.970641 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:35.970699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:36.000035 2149446 cri.go:96] found id: ""
	I1222 01:53:36.000060 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.000069 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:36.000076 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:36.000146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:36.030030 2149446 cri.go:96] found id: ""
	I1222 01:53:36.030062 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.030072 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:36.030085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:36.030160 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:36.056509 2149446 cri.go:96] found id: ""
	I1222 01:53:36.056535 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.056547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:36.056554 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:36.056615 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:36.083227 2149446 cri.go:96] found id: ""
	I1222 01:53:36.083254 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.083262 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:36.083271 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:36.083282 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:36.112082 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:36.112120 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:36.144537 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:36.144568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:36.209908 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:36.209989 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:36.228268 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:36.228349 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:36.290580 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:36.282527   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.283074   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284509   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284836   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.286252   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
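	Every retry fails identically: kubectl cannot even fetch the API group list because nothing accepts connections on 127.0.0.1:8443. Two hypothetical checks (not part of the recorded run) would confirm the port is simply unbound:

	    # Is anything listening on the apiserver port?
	    sudo ss -ltnp | grep 8443

	    # If something is, does the apiserver answer its liveness endpoint?
	    curl -k https://localhost:8443/livez

	With no kube-apiserver container created, both come back empty or refused, matching the memcache.go errors above.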
	I1222 01:53:38.791376 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:38.803195 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:38.803268 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:38.827957 2149446 cri.go:96] found id: ""
	I1222 01:53:38.827983 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.827991 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:38.827997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:38.828059 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:38.858193 2149446 cri.go:96] found id: ""
	I1222 01:53:38.858219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.858232 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:38.858238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:38.858300 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:38.892136 2149446 cri.go:96] found id: ""
	I1222 01:53:38.892168 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.892177 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:38.892184 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:38.892262 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:38.921915 2149446 cri.go:96] found id: ""
	I1222 01:53:38.921984 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.921999 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:38.922008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:38.922067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:38.947824 2149446 cri.go:96] found id: ""
	I1222 01:53:38.947847 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.947855 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:38.947862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:38.947921 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:38.972933 2149446 cri.go:96] found id: ""
	I1222 01:53:38.972959 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.972967 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:38.972974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:38.973039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:39.007055 2149446 cri.go:96] found id: ""
	I1222 01:53:39.007083 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.007092 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:39.007099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:39.007162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:39.033180 2149446 cri.go:96] found id: ""
	I1222 01:53:39.033210 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.033219 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:39.033228 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:39.033240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:39.098988 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:39.099022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:39.117169 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:39.117195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:39.179213 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:39.171486   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.172049   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.173620   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.174041   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.175480   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:39.179238 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:39.179250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:39.207330 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:39.207363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:41.736154 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:41.748704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:41.748779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:41.774614 2149446 cri.go:96] found id: ""
	I1222 01:53:41.774640 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.774658 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:41.774668 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:41.774726 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:41.804623 2149446 cri.go:96] found id: ""
	I1222 01:53:41.804649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.804658 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:41.804665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:41.804723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:41.833829 2149446 cri.go:96] found id: ""
	I1222 01:53:41.833856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.833865 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:41.833872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:41.833936 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:41.875465 2149446 cri.go:96] found id: ""
	I1222 01:53:41.875492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.875500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:41.875507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:41.875571 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:41.905442 2149446 cri.go:96] found id: ""
	I1222 01:53:41.905467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.905476 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:41.905482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:41.905570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:41.932195 2149446 cri.go:96] found id: ""
	I1222 01:53:41.932222 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.932231 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:41.932238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:41.932323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:41.958757 2149446 cri.go:96] found id: ""
	I1222 01:53:41.958781 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.958795 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:41.958801 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:41.958884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:41.984634 2149446 cri.go:96] found id: ""
	I1222 01:53:41.984658 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.984666 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:41.984683 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:41.984711 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:42.012813 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:42.012851 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:42.047278 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:42.047308 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:42.115243 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:42.115288 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:42.152682 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:42.152891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:42.276937 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:42.267520   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.268654   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.269846   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.270701   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.271864   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:44.778660 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:44.788968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:44.789038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:44.814474 2149446 cri.go:96] found id: ""
	I1222 01:53:44.814498 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.814507 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:44.814514 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:44.814572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:44.838957 2149446 cri.go:96] found id: ""
	I1222 01:53:44.838981 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.838990 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:44.838997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:44.839055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:44.871015 2149446 cri.go:96] found id: ""
	I1222 01:53:44.871040 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.871048 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:44.871055 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:44.871126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:44.901830 2149446 cri.go:96] found id: ""
	I1222 01:53:44.901852 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.901860 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:44.901866 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:44.901926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:44.932837 2149446 cri.go:96] found id: ""
	I1222 01:53:44.932912 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.932936 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:44.932954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:44.933034 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:44.958646 2149446 cri.go:96] found id: ""
	I1222 01:53:44.958670 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.958679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:44.958685 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:44.958745 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:44.990447 2149446 cri.go:96] found id: ""
	I1222 01:53:44.990470 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.990479 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:44.990485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:44.990551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:45.030189 2149446 cri.go:96] found id: ""
	I1222 01:53:45.030216 2149446 logs.go:282] 0 containers: []
	W1222 01:53:45.030226 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:45.030236 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:45.030250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:45.105196 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:45.105243 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:45.127602 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:45.127641 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:45.224662 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:45.212653   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.213852   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.215129   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.216095   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.218056   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:45.224690 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:45.224706 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:45.294896 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:45.294933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:47.831558 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:47.843230 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:47.843313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:47.888291 2149446 cri.go:96] found id: ""
	I1222 01:53:47.888314 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.888323 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:47.888329 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:47.888398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:47.921941 2149446 cri.go:96] found id: ""
	I1222 01:53:47.921965 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.921973 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:47.921979 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:47.922038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:47.948316 2149446 cri.go:96] found id: ""
	I1222 01:53:47.948349 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.948358 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:47.948373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:47.948449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:47.975804 2149446 cri.go:96] found id: ""
	I1222 01:53:47.975882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.975898 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:47.975905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:47.975969 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:48.017770 2149446 cri.go:96] found id: ""
	I1222 01:53:48.017798 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.017807 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:48.017814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:48.017876 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:48.045819 2149446 cri.go:96] found id: ""
	I1222 01:53:48.045844 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.045853 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:48.045862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:48.045924 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:48.072420 2149446 cri.go:96] found id: ""
	I1222 01:53:48.072446 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.072454 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:48.072467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:48.072530 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:48.098359 2149446 cri.go:96] found id: ""
	I1222 01:53:48.098384 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.098392 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:48.098401 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:48.098443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:48.115484 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:48.115515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:48.182899 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:48.174594   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.175246   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.176834   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.177351   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.178997   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:48.174594   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.175246   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.176834   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.177351   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.178997   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:48.182970 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:48.182991 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:48.211848 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:48.211883 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:48.240534 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:48.240647 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
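Between apiserver probes, minikube gathers the same handful of log sources on every cycle. All of the commands appear verbatim in the output above, so the collection can be repeated by hand on the node (a sketch only; minikube runs these over SSH via ssh_runner):

	sudo journalctl -u kubelet -n 400                                        # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors
	sudo journalctl -u crio -n 400                                           # CRI-O logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status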
	I1222 01:53:50.807734 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:50.819902 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:50.819972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:50.856953 2149446 cri.go:96] found id: ""
	I1222 01:53:50.856980 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.856989 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:50.856995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:50.857057 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:50.893636 2149446 cri.go:96] found id: ""
	I1222 01:53:50.893665 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.893675 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:50.893681 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:50.893742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:50.923962 2149446 cri.go:96] found id: ""
	I1222 01:53:50.923991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.924000 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:50.924006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:50.924066 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:50.949904 2149446 cri.go:96] found id: ""
	I1222 01:53:50.949931 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.949940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:50.949946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:50.950007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:50.979592 2149446 cri.go:96] found id: ""
	I1222 01:53:50.979619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.979627 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:50.979633 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:50.979696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:51.006796 2149446 cri.go:96] found id: ""
	I1222 01:53:51.006823 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.006831 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:51.006838 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:51.006907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:51.036723 2149446 cri.go:96] found id: ""
	I1222 01:53:51.036750 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.036758 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:51.036765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:51.036826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:51.066606 2149446 cri.go:96] found id: ""
	I1222 01:53:51.066630 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.066639 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:51.066648 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:51.066661 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:51.132268 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:51.132306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:51.150033 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:51.150064 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:51.221006 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:51.211594   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.212356   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.214275   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215136   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215987   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:51.211594   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.212356   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.214275   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215136   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215987   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:51.221029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:51.221042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:51.249405 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:51.249437 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:53.781794 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:53.792491 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:53.792562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:53.821565 2149446 cri.go:96] found id: ""
	I1222 01:53:53.821592 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.821601 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:53.821607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:53.821664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:53.858004 2149446 cri.go:96] found id: ""
	I1222 01:53:53.858026 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.858035 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:53.858042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:53.858102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:53.884816 2149446 cri.go:96] found id: ""
	I1222 01:53:53.884840 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.884848 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:53.884855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:53.884917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:53.910690 2149446 cri.go:96] found id: ""
	I1222 01:53:53.910763 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.910785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:53.910806 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:53.910896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:53.938355 2149446 cri.go:96] found id: ""
	I1222 01:53:53.938382 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.938392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:53.938398 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:53.938484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:53.968188 2149446 cri.go:96] found id: ""
	I1222 01:53:53.968220 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.968229 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:53.968236 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:53.968344 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:53.997318 2149446 cri.go:96] found id: ""
	I1222 01:53:53.997373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.997382 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:53.997388 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:53.997462 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:54.025412 2149446 cri.go:96] found id: ""
	I1222 01:53:54.025514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:54.025538 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:54.025561 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:54.025608 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:54.093935 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:54.085512   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.086255   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.087980   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.088420   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.090003   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:54.085512   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.086255   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.087980   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.088420   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.090003   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:54.093957 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:54.093969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:54.123758 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:54.123795 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:54.153879 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:54.153908 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:54.220347 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:54.220396 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:56.737648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:56.748600 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:56.748674 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:56.774130 2149446 cri.go:96] found id: ""
	I1222 01:53:56.774156 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.774164 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:56.774171 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:56.774232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:56.800427 2149446 cri.go:96] found id: ""
	I1222 01:53:56.800458 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.800466 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:56.800472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:56.800533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:56.825296 2149446 cri.go:96] found id: ""
	I1222 01:53:56.825341 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.825352 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:56.825359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:56.825417 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:56.859965 2149446 cri.go:96] found id: ""
	I1222 01:53:56.859992 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.860000 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:56.860007 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:56.860070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:56.893738 2149446 cri.go:96] found id: ""
	I1222 01:53:56.893764 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.893773 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:56.893779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:56.893840 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:56.922780 2149446 cri.go:96] found id: ""
	I1222 01:53:56.922805 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.922814 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:56.922820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:56.922882 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:56.951835 2149446 cri.go:96] found id: ""
	I1222 01:53:56.951865 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.951874 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:56.951881 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:56.951942 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:56.982753 2149446 cri.go:96] found id: ""
	I1222 01:53:56.982788 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.982802 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:56.982832 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:56.982859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:57.054734 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:57.054771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:57.071366 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:57.071447 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:57.139693 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:57.131885   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.132329   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134013   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134495   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.135751   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:57.131885   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.132329   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134013   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134495   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.135751   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:57.139715 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:57.139728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:57.168869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:57.168907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.701462 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:59.712252 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:59.712324 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:59.737831 2149446 cri.go:96] found id: ""
	I1222 01:53:59.737856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.737865 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:59.737872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:59.737935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:59.765066 2149446 cri.go:96] found id: ""
	I1222 01:53:59.765090 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.765099 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:59.765105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:59.765165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:59.789855 2149446 cri.go:96] found id: ""
	I1222 01:53:59.789878 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.789887 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:59.789893 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:59.789965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:59.815579 2149446 cri.go:96] found id: ""
	I1222 01:53:59.815601 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.815609 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:59.815616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:59.815676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:59.840029 2149446 cri.go:96] found id: ""
	I1222 01:53:59.840051 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.840059 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:59.840066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:59.840127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:59.870757 2149446 cri.go:96] found id: ""
	I1222 01:53:59.870790 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.870804 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:59.870815 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:59.870883 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:59.899916 2149446 cri.go:96] found id: ""
	I1222 01:53:59.899941 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.899954 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:59.899962 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:59.900029 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:59.927188 2149446 cri.go:96] found id: ""
	I1222 01:53:59.927221 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.927230 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:59.927239 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:59.927250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:59.957210 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:59.957247 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.988485 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:59.988515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:00.082944 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:00.082993 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:00.133491 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:00.133527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:00.297208 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:00.279334   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.280543   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.281714   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.282703   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.290162   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:00.279334   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.280543   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.281714   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.282703   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.290162   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
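Each retry above runs the same health probe: a pgrep for a running kube-apiserver process, then a crictl listing for every expected control-plane container by name, all of which come back empty. A minimal standalone sketch of that probe, assuming only that crictl is installed and sudo is available (the container names and crictl flags are copied verbatim from the log; the loop itself is illustrative, not minikube's code):

	#!/usr/bin/env bash
	# List all CRI containers matching each component name and report
	# the ones with no match, mirroring the logs.go:284 warnings above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done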
	I1222 01:54:02.797491 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:02.810137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:02.810229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:02.840273 2149446 cri.go:96] found id: ""
	I1222 01:54:02.840297 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.840305 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:02.840312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:02.840420 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:02.875715 2149446 cri.go:96] found id: ""
	I1222 01:54:02.875739 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.875747 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:02.875754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:02.875813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:02.919822 2149446 cri.go:96] found id: ""
	I1222 01:54:02.919846 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.919854 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:02.919860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:02.919919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:02.948124 2149446 cri.go:96] found id: ""
	I1222 01:54:02.948146 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.948154 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:02.948160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:02.948219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:02.974340 2149446 cri.go:96] found id: ""
	I1222 01:54:02.974363 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.974372 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:02.974378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:02.974438 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:02.999271 2149446 cri.go:96] found id: ""
	I1222 01:54:02.999296 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.999305 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:02.999312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:02.999385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:03.030970 2149446 cri.go:96] found id: ""
	I1222 01:54:03.030993 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.031002 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:03.031008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:03.031068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:03.056002 2149446 cri.go:96] found id: ""
	I1222 01:54:03.056029 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.056038 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:03.056047 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:03.056058 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:03.125375 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:03.125411 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:03.145686 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:03.145716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:03.218509 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:03.209113   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.210343   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.211031   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.212871   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.213440   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:03.209113   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.210343   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.211031   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.212871   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.213440   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:03.218530 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:03.218542 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:03.248642 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:03.248677 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:05.780702 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:05.791068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:05.791146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:05.816364 2149446 cri.go:96] found id: ""
	I1222 01:54:05.816438 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.816460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:05.816477 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:05.816563 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:05.841188 2149446 cri.go:96] found id: ""
	I1222 01:54:05.841211 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.841220 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:05.841226 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:05.841288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:05.881850 2149446 cri.go:96] found id: ""
	I1222 01:54:05.881872 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.881881 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:05.881887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:05.881948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:05.916551 2149446 cri.go:96] found id: ""
	I1222 01:54:05.916573 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.916582 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:05.916589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:05.916651 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:05.943199 2149446 cri.go:96] found id: ""
	I1222 01:54:05.943226 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.943234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:05.943240 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:05.943305 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:05.970571 2149446 cri.go:96] found id: ""
	I1222 01:54:05.970596 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.970605 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:05.970611 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:05.970671 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:05.995460 2149446 cri.go:96] found id: ""
	I1222 01:54:05.995526 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.995547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:05.995567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:05.995644 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:06.023405 2149446 cri.go:96] found id: ""
	I1222 01:54:06.023433 2149446 logs.go:282] 0 containers: []
	W1222 01:54:06.023442 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:06.023451 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:06.023462 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:06.052389 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:06.052429 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:06.081830 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:06.081857 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:06.148270 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:06.148310 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:06.164614 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:06.164642 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:06.229843 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:06.221234   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.222106   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.223684   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.224107   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.225591   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:06.221234   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.222106   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.223684   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.224107   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.225591   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
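The "describe nodes" step fails the same way on every cycle: kubectl cannot reach an apiserver, because nothing is listening on localhost:8443. The failing command can be re-run by hand with the exact binary path and kubeconfig shown in the log (both paths copied verbatim from the output above):

	# With no apiserver on localhost:8443 this exits with status 1 and
	# "connection refused" on [::1]:8443, matching the stderr captured above.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig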
	I1222 01:54:08.730857 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:08.741318 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:08.741413 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:08.767143 2149446 cri.go:96] found id: ""
	I1222 01:54:08.767169 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.767182 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:08.767190 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:08.767248 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:08.791625 2149446 cri.go:96] found id: ""
	I1222 01:54:08.791658 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.791667 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:08.791673 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:08.791739 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:08.816602 2149446 cri.go:96] found id: ""
	I1222 01:54:08.816636 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.816645 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:08.816652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:08.816718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:08.844185 2149446 cri.go:96] found id: ""
	I1222 01:54:08.844261 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.844283 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:08.844302 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:08.844404 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:08.883312 2149446 cri.go:96] found id: ""
	I1222 01:54:08.883384 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.883408 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:08.883428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:08.883517 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:08.911421 2149446 cri.go:96] found id: ""
	I1222 01:54:08.911457 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.911466 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:08.911473 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:08.911538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:08.942750 2149446 cri.go:96] found id: ""
	I1222 01:54:08.942775 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.942784 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:08.942791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:08.942848 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:08.967789 2149446 cri.go:96] found id: ""
	I1222 01:54:08.967813 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.967822 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:08.967831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:08.967842 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:09.033639 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:09.033675 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:09.052146 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:09.052175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:09.116330 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:09.116356 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:09.116368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:09.144973 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:09.145004 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:11.673475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:11.684547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:11.684626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:11.710519 2149446 cri.go:96] found id: ""
	I1222 01:54:11.710548 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.710557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:11.710564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:11.710626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:11.735548 2149446 cri.go:96] found id: ""
	I1222 01:54:11.735572 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.735582 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:11.735589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:11.735654 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:11.760929 2149446 cri.go:96] found id: ""
	I1222 01:54:11.760954 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.760963 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:11.760969 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:11.761033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:11.787093 2149446 cri.go:96] found id: ""
	I1222 01:54:11.787120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.787128 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:11.787135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:11.787193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:11.812911 2149446 cri.go:96] found id: ""
	I1222 01:54:11.812936 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.812945 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:11.812951 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:11.813015 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:11.838482 2149446 cri.go:96] found id: ""
	I1222 01:54:11.838505 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.838514 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:11.838520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:11.838583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:11.866259 2149446 cri.go:96] found id: ""
	I1222 01:54:11.866286 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.866306 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:11.866313 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:11.866384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:11.893985 2149446 cri.go:96] found id: ""
	I1222 01:54:11.894008 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.894016 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
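
Each cycle walks the same eight control-plane and addon component names through crictl before falling back to raw log collection. The scan can be reproduced by hand with a loop; the component list and the 10s timeout are taken from the log, the loop itself is a sketch:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids="$(sudo crictl --timeout=10s ps -a --quiet --name="$name")"
	  echo "$name: ${ids:-<none>}"
	done
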
	I1222 01:54:11.894025 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:11.894037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:11.913582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:11.913611 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:11.981873 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:11.974736   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.975183   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.976706   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.977106   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.978227   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:11.981895 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:11.981907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:12.010106 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:12.010148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:12.040439 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:12.040469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
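
Four log sources are gathered per cycle: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg (warn level and above only), and container status via crictl. A one-shot capture of the same sources into files for offline inspection (the file names are illustrative):

	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u crio -n 400 > crio.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo crictl ps -a > containers.txt
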
	I1222 01:54:14.613993 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:14.626494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:14.626566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:14.659356 2149446 cri.go:96] found id: ""
	I1222 01:54:14.659378 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.659386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:14.659392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:14.659453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:14.686289 2149446 cri.go:96] found id: ""
	I1222 01:54:14.686311 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.686319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:14.686326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:14.686382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:14.711394 2149446 cri.go:96] found id: ""
	I1222 01:54:14.711422 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.711432 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:14.711438 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:14.711497 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:14.739481 2149446 cri.go:96] found id: ""
	I1222 01:54:14.739551 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.739573 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:14.739587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:14.739660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:14.768068 2149446 cri.go:96] found id: ""
	I1222 01:54:14.768090 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.768098 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:14.768105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:14.768164 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:14.794022 2149446 cri.go:96] found id: ""
	I1222 01:54:14.794045 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.794054 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:14.794061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:14.794122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:14.826016 2149446 cri.go:96] found id: ""
	I1222 01:54:14.826083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.826098 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:14.826105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:14.826165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:14.862200 2149446 cri.go:96] found id: ""
	I1222 01:54:14.862227 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.862238 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:14.862247 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:14.862266 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.936812 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:14.936849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:14.955319 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:14.955350 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:15.031542 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
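
The describe-nodes probe deliberately uses the node-local kubectl binary and the in-node kubeconfig at /var/lib/minikube/kubeconfig, so it exercises the apiserver from inside the node. Once the apiserver answers again, the same view should be available from the host through minikube's kubectl wrapper (the profile name below is a placeholder):

	minikube -p <profile> kubectl -- describe nodes
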
	I1222 01:54:15.031567 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:15.031581 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:15.060447 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:15.060486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:17.589430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:17.599883 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:17.600008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:17.629312 2149446 cri.go:96] found id: ""
	I1222 01:54:17.629354 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.629363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:17.629370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:17.629430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:17.666013 2149446 cri.go:96] found id: ""
	I1222 01:54:17.666039 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.666048 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:17.666054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:17.666114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:17.690505 2149446 cri.go:96] found id: ""
	I1222 01:54:17.690528 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.690536 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:17.690543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:17.690603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:17.716599 2149446 cri.go:96] found id: ""
	I1222 01:54:17.716623 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.716631 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:17.716647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:17.716706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:17.741489 2149446 cri.go:96] found id: ""
	I1222 01:54:17.741513 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.741522 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:17.741528 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:17.741587 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:17.770157 2149446 cri.go:96] found id: ""
	I1222 01:54:17.770183 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.770192 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:17.770198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:17.770286 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:17.794804 2149446 cri.go:96] found id: ""
	I1222 01:54:17.794828 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.794843 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:17.794850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:17.794912 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:17.820386 2149446 cri.go:96] found id: ""
	I1222 01:54:17.820411 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.820421 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:17.820429 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:17.820441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:17.886892 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:17.886927 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:17.906329 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:17.906358 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:17.968971 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:17.968994 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:17.969007 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:17.997165 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:17.997196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.530538 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:20.541178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:20.541247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:20.567334 2149446 cri.go:96] found id: ""
	I1222 01:54:20.567358 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.567367 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:20.567373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:20.567433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:20.594495 2149446 cri.go:96] found id: ""
	I1222 01:54:20.594519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.594528 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:20.594534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:20.594592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:20.627810 2149446 cri.go:96] found id: ""
	I1222 01:54:20.627837 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.627846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:20.627853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:20.627913 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:20.659100 2149446 cri.go:96] found id: ""
	I1222 01:54:20.659127 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.659136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:20.659142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:20.659203 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:20.684059 2149446 cri.go:96] found id: ""
	I1222 01:54:20.684083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.684092 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:20.684099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:20.684158 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:20.708264 2149446 cri.go:96] found id: ""
	I1222 01:54:20.708289 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.708298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:20.708305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:20.708373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:20.732615 2149446 cri.go:96] found id: ""
	I1222 01:54:20.732640 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.732649 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:20.732656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:20.732718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:20.764171 2149446 cri.go:96] found id: ""
	I1222 01:54:20.764198 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.764207 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:20.764215 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:20.764228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:20.780713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:20.780742 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:20.851938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:20.851961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:20.851974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:20.893661 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:20.893699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.931494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:20.931521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:23.498941 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:23.509907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:23.509983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:23.539967 2149446 cri.go:96] found id: ""
	I1222 01:54:23.539997 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.540005 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:23.540012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:23.540071 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:23.569856 2149446 cri.go:96] found id: ""
	I1222 01:54:23.569884 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.569894 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:23.569900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:23.569961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:23.604014 2149446 cri.go:96] found id: ""
	I1222 01:54:23.604082 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.604103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:23.604124 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:23.604217 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:23.632481 2149446 cri.go:96] found id: ""
	I1222 01:54:23.632510 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.632519 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:23.632525 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:23.632586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:23.663814 2149446 cri.go:96] found id: ""
	I1222 01:54:23.663838 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.663847 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:23.663853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:23.663916 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:23.693790 2149446 cri.go:96] found id: ""
	I1222 01:54:23.693815 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.693824 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:23.693831 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:23.693891 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:23.722928 2149446 cri.go:96] found id: ""
	I1222 01:54:23.722995 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.723009 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:23.723017 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:23.723077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:23.749108 2149446 cri.go:96] found id: ""
	I1222 01:54:23.749133 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.749142 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:23.749151 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:23.749162 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:23.814472 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:23.814506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:23.831927 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:23.831957 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:23.920909 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:23.912937   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.913679   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915294   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915601   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.917055   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:23.920931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:23.920943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:23.949424 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:23.949460 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:26.476761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:26.487732 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:26.487807 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:26.517071 2149446 cri.go:96] found id: ""
	I1222 01:54:26.517147 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.517162 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:26.517170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:26.517245 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:26.542078 2149446 cri.go:96] found id: ""
	I1222 01:54:26.542101 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.542110 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:26.542116 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:26.542177 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:26.567743 2149446 cri.go:96] found id: ""
	I1222 01:54:26.567767 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.567775 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:26.567782 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:26.567843 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:26.595280 2149446 cri.go:96] found id: ""
	I1222 01:54:26.595307 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.595316 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:26.595322 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:26.595382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:26.624206 2149446 cri.go:96] found id: ""
	I1222 01:54:26.624232 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.624241 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:26.624248 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:26.624308 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:26.658482 2149446 cri.go:96] found id: ""
	I1222 01:54:26.658508 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.658516 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:26.658523 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:26.658581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:26.686129 2149446 cri.go:96] found id: ""
	I1222 01:54:26.686152 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.686161 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:26.686168 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:26.686249 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:26.711709 2149446 cri.go:96] found id: ""
	I1222 01:54:26.711736 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.711745 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:26.711754 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:26.711764 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:26.778620 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:26.778656 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:26.795417 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:26.795446 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:26.867101 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:26.857306   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.858065   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.859736   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.860499   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.862034   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:26.867125 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:26.867141 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:26.902503 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:26.902544 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:29.434518 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:29.445170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:29.445239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:29.470088 2149446 cri.go:96] found id: ""
	I1222 01:54:29.470111 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.470120 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:29.470126 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:29.470190 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:29.494531 2149446 cri.go:96] found id: ""
	I1222 01:54:29.494553 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.494562 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:29.494568 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:29.494627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:29.519108 2149446 cri.go:96] found id: ""
	I1222 01:54:29.519131 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.519139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:29.519146 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:29.519212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:29.544138 2149446 cri.go:96] found id: ""
	I1222 01:54:29.544162 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.544170 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:29.544176 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:29.544237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:29.569957 2149446 cri.go:96] found id: ""
	I1222 01:54:29.569982 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.569991 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:29.569999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:29.570068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:29.601725 2149446 cri.go:96] found id: ""
	I1222 01:54:29.601748 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.601756 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:29.601763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:29.601826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:29.631880 2149446 cri.go:96] found id: ""
	I1222 01:54:29.631904 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.631912 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:29.631918 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:29.631981 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:29.664642 2149446 cri.go:96] found id: ""
	I1222 01:54:29.664669 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.664678 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:29.664686 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:29.664698 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:29.732517 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:29.732561 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:29.751836 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:29.751868 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:29.825936 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:29.817178   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.817918   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.819621   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.820393   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.822050   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:29.825958 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:29.825971 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:29.860213 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:29.860248 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:32.399119 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:32.410321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:32.410395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:32.436535 2149446 cri.go:96] found id: ""
	I1222 01:54:32.436567 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.436576 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:32.436589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:32.436667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:32.462497 2149446 cri.go:96] found id: ""
	I1222 01:54:32.462522 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.462531 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:32.462538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:32.462598 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:32.488532 2149446 cri.go:96] found id: ""
	I1222 01:54:32.488607 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.488631 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:32.488651 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:32.488727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:32.513915 2149446 cri.go:96] found id: ""
	I1222 01:54:32.513939 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.513956 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:32.513980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:32.514061 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:32.541025 2149446 cri.go:96] found id: ""
	I1222 01:54:32.541050 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.541060 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:32.541067 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:32.541130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:32.568517 2149446 cri.go:96] found id: ""
	I1222 01:54:32.568544 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.568553 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:32.568561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:32.568633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:32.596417 2149446 cri.go:96] found id: ""
	I1222 01:54:32.596444 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.596453 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:32.596459 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:32.596522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:32.629445 2149446 cri.go:96] found id: ""
	I1222 01:54:32.629473 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.629482 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:32.629491 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:32.629504 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:32.700379 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:32.700415 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:32.717228 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:32.717259 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:32.783507 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:32.774480   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.775245   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777042   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.779225   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:32.774480   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.775245   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777042   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.779225   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:32.783529 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:32.783541 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:32.812359 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:32.812415 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:35.344384 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:35.357066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:35.357137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:35.387791 2149446 cri.go:96] found id: ""
	I1222 01:54:35.387821 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.387832 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:35.387839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:35.387907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:35.413270 2149446 cri.go:96] found id: ""
	I1222 01:54:35.413294 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.413303 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:35.413309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:35.413385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:35.438755 2149446 cri.go:96] found id: ""
	I1222 01:54:35.438783 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.438791 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:35.438798 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:35.438859 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:35.464735 2149446 cri.go:96] found id: ""
	I1222 01:54:35.464761 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.464771 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:35.464778 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:35.464838 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:35.490615 2149446 cri.go:96] found id: ""
	I1222 01:54:35.490639 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.490647 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:35.490654 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:35.490722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:35.520602 2149446 cri.go:96] found id: ""
	I1222 01:54:35.520628 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.520637 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:35.520644 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:35.520706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:35.545887 2149446 cri.go:96] found id: ""
	I1222 01:54:35.545911 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.545919 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:35.545928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:35.545989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:35.571922 2149446 cri.go:96] found id: ""
	I1222 01:54:35.571945 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.571953 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:35.571962 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:35.571973 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:35.641377 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:35.641458 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:35.658922 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:35.659080 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:35.721523 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:35.714066   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.714428   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.715873   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.716191   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.717743   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:35.714066   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.714428   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.715873   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.716191   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.717743   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:35.721545 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:35.721560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:35.749350 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:35.749380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:38.281427 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:38.293044 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:38.293127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:38.321491 2149446 cri.go:96] found id: ""
	I1222 01:54:38.321519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.321528 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:38.321534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:38.321593 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:38.346163 2149446 cri.go:96] found id: ""
	I1222 01:54:38.346238 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.346261 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:38.346279 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:38.346367 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:38.372025 2149446 cri.go:96] found id: ""
	I1222 01:54:38.372050 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.372058 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:38.372065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:38.372133 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:38.399739 2149446 cri.go:96] found id: ""
	I1222 01:54:38.399762 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.399770 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:38.399780 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:38.399844 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:38.424497 2149446 cri.go:96] found id: ""
	I1222 01:54:38.424565 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.424589 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:38.424608 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:38.424696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:38.449671 2149446 cri.go:96] found id: ""
	I1222 01:54:38.449699 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.449708 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:38.449714 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:38.449776 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:38.477607 2149446 cri.go:96] found id: ""
	I1222 01:54:38.477674 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.477697 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:38.477710 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:38.477785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:38.506176 2149446 cri.go:96] found id: ""
	I1222 01:54:38.506204 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.506212 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:38.506220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:38.506231 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:38.533790 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:38.533822 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:38.561674 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:38.561700 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:38.628458 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:38.628502 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:38.649997 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:38.650027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:38.712931 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:38.704109   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.704905   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.706476   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.707004   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.708610   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:38.704109   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.704905   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.706476   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.707004   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.708610   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:41.213505 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:41.225530 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:41.225616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:41.255991 2149446 cri.go:96] found id: ""
	I1222 01:54:41.256021 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.256029 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:41.256036 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:41.256094 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:41.282289 2149446 cri.go:96] found id: ""
	I1222 01:54:41.282314 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.282333 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:41.282339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:41.282419 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:41.308460 2149446 cri.go:96] found id: ""
	I1222 01:54:41.308488 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.308499 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:41.308505 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:41.308566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:41.333516 2149446 cri.go:96] found id: ""
	I1222 01:54:41.333547 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.333555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:41.333562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:41.333620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:41.359576 2149446 cri.go:96] found id: ""
	I1222 01:54:41.359598 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.359607 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:41.359614 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:41.359672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:41.384042 2149446 cri.go:96] found id: ""
	I1222 01:54:41.384066 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.384075 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:41.384082 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:41.384142 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:41.413519 2149446 cri.go:96] found id: ""
	I1222 01:54:41.413549 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.413559 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:41.413565 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:41.413624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:41.454139 2149446 cri.go:96] found id: ""
	I1222 01:54:41.454162 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.454171 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:41.454180 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:41.454191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:41.482322 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:41.482355 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:41.511950 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:41.511979 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:41.577845 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:41.577882 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:41.595747 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:41.595826 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:41.676481 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:41.667994   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.668497   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670197   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670551   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.672124   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:41.667994   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.668497   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670197   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670551   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.672124   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:44.178129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:44.189537 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:44.189613 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:44.217535 2149446 cri.go:96] found id: ""
	I1222 01:54:44.217563 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.217572 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:44.217580 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:44.217646 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:44.243271 2149446 cri.go:96] found id: ""
	I1222 01:54:44.243299 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.243308 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:44.243315 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:44.243379 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:44.272096 2149446 cri.go:96] found id: ""
	I1222 01:54:44.272120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.272129 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:44.272135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:44.272194 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:44.298955 2149446 cri.go:96] found id: ""
	I1222 01:54:44.298983 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.298993 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:44.298999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:44.299070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:44.324703 2149446 cri.go:96] found id: ""
	I1222 01:54:44.324728 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.324737 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:44.324743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:44.324834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:44.350284 2149446 cri.go:96] found id: ""
	I1222 01:54:44.350307 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.350316 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:44.350322 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:44.350384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:44.378349 2149446 cri.go:96] found id: ""
	I1222 01:54:44.378375 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.378384 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:44.378390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:44.378453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:44.408220 2149446 cri.go:96] found id: ""
	I1222 01:54:44.408244 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.408258 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:44.408271 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:44.408283 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:44.474431 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:44.474465 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:44.491072 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:44.491099 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:44.556034 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:44.547925   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.548318   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.549960   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.550258   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.551701   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:44.547925   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.548318   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.549960   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.550258   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.551701   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:44.556055 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:44.556068 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:44.584394 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:44.584431 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.124887 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:47.137092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:47.137169 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:47.164240 2149446 cri.go:96] found id: ""
	I1222 01:54:47.164268 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.164277 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:47.164284 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:47.164343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:47.189698 2149446 cri.go:96] found id: ""
	I1222 01:54:47.189723 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.189732 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:47.189739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:47.189799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:47.215248 2149446 cri.go:96] found id: ""
	I1222 01:54:47.215276 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.215285 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:47.215291 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:47.215350 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:47.241376 2149446 cri.go:96] found id: ""
	I1222 01:54:47.241401 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.241409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:47.241416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:47.241475 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:47.266358 2149446 cri.go:96] found id: ""
	I1222 01:54:47.266383 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.266392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:47.266400 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:47.266477 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:47.292142 2149446 cri.go:96] found id: ""
	I1222 01:54:47.292167 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.292175 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:47.292182 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:47.292260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:47.318478 2149446 cri.go:96] found id: ""
	I1222 01:54:47.318503 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.318511 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:47.318520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:47.318581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:47.343233 2149446 cri.go:96] found id: ""
	I1222 01:54:47.343313 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.343328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:47.343338 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:47.343351 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:47.360045 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:47.360074 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:47.426418 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:47.426442 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:47.426457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:47.455495 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:47.455529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.485526 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:47.485555 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:50.052484 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:50.066978 2149446 out.go:203] 
	W1222 01:54:50.069810 2149446 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:54:50.069848 2149446 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:54:50.069857 2149446 out.go:285] * Related issues:
	* Related issues:
	W1222 01:54:50.069872 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:54:50.069887 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:54:50.072764 2149446 out.go:203] 
** /stderr **
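The stderr above is one probe cycle repeated roughly every three seconds until the 6m0s node wait expires: a pgrep for an apiserver process, a crictl listing per control-plane container (all empty), then kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering, with `kubectl describe nodes` refused because nothing listens on localhost:8443. A minimal sketch of running the same probes by hand, using the profile name from this run (a healthy node returns a PID and container IDs):

    minikube ssh -p newest-cni-704217                   # shell into the still-running kic node
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # the process check minikube loops on; no output here
    sudo crictl ps -a --quiet --name=kube-apiserver     # the CRI check; empty, CRI-O never created the container
    sudo journalctl -u kubelet -n 100 --no-pager        # why the static pods never came up

Since every cycle ends the same way, the kubelet journal gathered above is the first place to look for the root cause.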
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 105
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-704217
helpers_test.go:244: (dbg) docker inspect newest-cni-704217:
-- stdout --
	[
	    {
	        "Id": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	        "Created": "2025-12-22T01:38:39.533406818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2149579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:48:40.537015605Z",
	            "FinishedAt": "2025-12-22T01:48:39.100562621Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hosts",
	        "LogPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e-json.log",
	        "Name": "/newest-cni-704217",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-704217:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-704217",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	                "LowerDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-704217",
	                "Source": "/var/lib/docker/volumes/newest-cni-704217/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-704217",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-704217",
	                "name.minikube.sigs.k8s.io": "newest-cni-704217",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14d50a92928bec4b8ab05dab0866e0253bd6e6099eb5f12680df80e7cd980f01",
	            "SandboxKey": "/var/run/docker/netns/14d50a92928b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38786"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38787"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38790"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38788"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38789"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-704217": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:54:eb:22:0d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37c5f2b876d48064d8ed7b8ba7f6e9a5833c911bae1041e13f11dbc7e56a4324",
	                    "EndpointID": "adf54697897a7d0e245c3181f4c38786594da1f48236d2a9134636be2465ea29",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-704217",
	                        "0d8468deabde"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
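The inspect output localizes the failure: the kic container itself is up (State.Status "running", RestartCount 0) and 8443/tcp is published to 127.0.0.1:38789, so the Docker networking layer is intact and the connection refusals come from inside the node. A quick host-side cross-check, assuming the ephemeral host port from this run (it changes on every start):

    docker port newest-cni-704217 8443/tcp      # prints 127.0.0.1:38789 for this run
    curl -k https://127.0.0.1:38789/livez       # refused while no apiserver is listening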
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (353.045686ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
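The status probe renders one field of minikube's status struct through a Go template, so the Running above is only the Host field; the non-zero exit encodes that other components are degraded, which is why the helper treats exit status 2 as possibly acceptable. A sketch querying the remaining fields (field names as listed in minikube's status help; same profile assumed):

    out/minikube-linux-arm64 status -p newest-cni-704217 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'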
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25: (1.540118165s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │ 22 Dec 25 01:35 UTC │
	│ image   │ embed-certs-235567 image list --format=json                                                                                                                                                                                                        │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	│ stop    │ -p no-preload-491054 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ addons  │ enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-704217 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ addons  │ enable dashboard -p newest-cni-704217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
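	Note: rows above with an empty END TIME are commands that had not finished (or never exited cleanly) when this log was captured; here that covers the pause invocations and the in-flight start commands, consistent with the failed Pause and FirstStart tests listed at the top of this report. A minimal sketch for extracting such entries straight from minikube's audit log (the path follows the MINIKUBE_HOME printed later in this log; the JSON field names are assumptions inferred from the table headers):

	AUDIT=/home/jenkins/minikube-integration/22179-1874095/.minikube/logs/audit.json
	# print command + args for every event without an end time (field names assumed)
	jq -r 'select((.data.endTime // "") == "") | [.data.command, .data.args] | @tsv' "$AUDIT"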
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:48:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
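	Note: per the format line above, each entry carries a glog-style prefix: severity letter (I, W, E, or F), then mmdd, wall-clock time, thread id, and file:line. A quick sketch for skimming only the warnings and errors out of a dump like this one (the file name is hypothetical):

	grep -E '^[WEF][0-9]{4} ' last-start.log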
	I1222 01:48:40.242767 2149446 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:48:40.242975 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243012 2149446 out.go:374] Setting ErrFile to fd 2...
	I1222 01:48:40.243032 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243304 2149446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:48:40.243710 2149446 out.go:368] Setting JSON to false
	I1222 01:48:40.244718 2149446 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117070,"bootTime":1766251051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:48:40.244822 2149446 start.go:143] virtualization:  
	I1222 01:48:40.247655 2149446 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1222 01:48:35.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:37.597647 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:40.096931 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:40.251418 2149446 notify.go:221] Checking for updates...
	I1222 01:48:40.254130 2149446 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:48:40.257153 2149446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:48:40.259997 2149446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:40.262948 2149446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:48:40.265828 2149446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:48:40.268754 2149446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:48:40.272145 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:40.272826 2149446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:48:40.305841 2149446 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:48:40.306001 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.377826 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.367957332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.377936 2149446 docker.go:319] overlay module found
	I1222 01:48:40.381097 2149446 out.go:179] * Using the docker driver based on existing profile
	I1222 01:48:40.383847 2149446 start.go:309] selected driver: docker
	I1222 01:48:40.383862 2149446 start.go:928] validating driver "docker" against &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.383955 2149446 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:48:40.384655 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.450570 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.441705374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.450907 2149446 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:48:40.450934 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:40.450985 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:40.451021 2149446 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.454057 2149446 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:48:40.456938 2149446 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:48:40.459827 2149446 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:48:40.462746 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:40.462788 2149446 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:48:40.462800 2149446 cache.go:65] Caching tarball of preloaded images
	I1222 01:48:40.462841 2149446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:48:40.462889 2149446 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:48:40.462900 2149446 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:48:40.463007 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.482525 2149446 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:48:40.482547 2149446 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:48:40.482567 2149446 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:48:40.482597 2149446 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:48:40.482656 2149446 start.go:364] duration metric: took 36.504µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:48:40.482682 2149446 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:48:40.482690 2149446 fix.go:54] fixHost starting: 
	I1222 01:48:40.482953 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.499420 2149446 fix.go:112] recreateIfNeeded on newest-cni-704217: state=Stopped err=<nil>
	W1222 01:48:40.499450 2149446 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:48:40.502622 2149446 out.go:252] * Restarting existing docker container for "newest-cni-704217" ...
	I1222 01:48:40.502704 2149446 cli_runner.go:164] Run: docker start newest-cni-704217
	I1222 01:48:40.772557 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.795810 2149446 kic.go:430] container "newest-cni-704217" state is running.
	I1222 01:48:40.796193 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:40.819335 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.819554 2149446 machine.go:94] provisionDockerMachine start ...
	I1222 01:48:40.819841 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:40.841916 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:40.842255 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:40.842265 2149446 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:48:40.844138 2149446 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:48:43.976885 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:43.976911 2149446 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:48:43.977014 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:43.994105 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:43.994420 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:43.994436 2149446 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:48:44.140605 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:44.140693 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.161016 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.161373 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.161391 2149446 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
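	Note: the hosts-file snippet above is idempotent: it only touches /etc/hosts when no line already ends with the new hostname, then either rewrites the existing 127.0.1.1 entry in place or appends one. A sketch for spot-checking the result over the same SSH endpoint and key that appear later in this log:

	ssh -i /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa \
	    -p 38786 docker@127.0.0.1 'grep 127.0.1.1 /etc/hosts'
	# expected: 127.0.1.1 newest-cni-704217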
	I1222 01:48:44.293770 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:48:44.293859 2149446 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:48:44.293905 2149446 ubuntu.go:190] setting up certificates
	I1222 01:48:44.293936 2149446 provision.go:84] configureAuth start
	I1222 01:48:44.294013 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:44.316482 2149446 provision.go:143] copyHostCerts
	I1222 01:48:44.316547 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:48:44.316556 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:48:44.316633 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:48:44.316728 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:48:44.316733 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:48:44.316759 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:48:44.316806 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:48:44.316811 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:48:44.316833 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:48:44.316876 2149446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:48:44.455342 2149446 provision.go:177] copyRemoteCerts
	I1222 01:48:44.455417 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:48:44.455467 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.474519 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:44.573438 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:48:44.590713 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:48:44.609690 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:48:44.627314 2149446 provision.go:87] duration metric: took 333.343615ms to configureAuth
	I1222 01:48:44.627344 2149446 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:48:44.627539 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:44.627653 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.649833 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.650146 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.650167 2149446 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:48:44.926182 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:48:44.926204 2149446 machine.go:97] duration metric: took 4.106640691s to provisionDockerMachine
	I1222 01:48:44.926234 2149446 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:48:44.926251 2149446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:48:44.926311 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:48:44.926356 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.946595 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.045827 2149446 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:48:45.062421 2149446 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:48:45.062452 2149446 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:48:45.062466 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:48:45.062535 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:48:45.062625 2149446 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:48:45.062747 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:48:45.075486 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:45.105717 2149446 start.go:296] duration metric: took 179.465684ms for postStartSetup
	I1222 01:48:45.105860 2149446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:48:45.105935 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.131206 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	W1222 01:48:42.097949 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:44.597158 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:45.246727 2149446 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:48:45.256769 2149446 fix.go:56] duration metric: took 4.774070481s for fixHost
	I1222 01:48:45.256799 2149446 start.go:83] releasing machines lock for "newest-cni-704217", held for 4.774128563s
	I1222 01:48:45.256887 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:45.279239 2149446 ssh_runner.go:195] Run: cat /version.json
	I1222 01:48:45.279292 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.279295 2149446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:48:45.279352 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.302892 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.315026 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.495990 2149446 ssh_runner.go:195] Run: systemctl --version
	I1222 01:48:45.502719 2149446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:48:45.539801 2149446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:48:45.545121 2149446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:48:45.545189 2149446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:48:45.553556 2149446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:48:45.553582 2149446 start.go:496] detecting cgroup driver to use...
	I1222 01:48:45.553612 2149446 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:48:45.553661 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:48:45.568706 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:48:45.581738 2149446 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:48:45.581818 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:48:45.597902 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:48:45.614693 2149446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:48:45.735512 2149446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:48:45.846918 2149446 docker.go:234] disabling docker service ...
	I1222 01:48:45.847000 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:48:45.861997 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:48:45.874967 2149446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:48:45.982721 2149446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:48:46.093378 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:48:46.106471 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:48:46.120288 2149446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:48:46.120369 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.130327 2149446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:48:46.130417 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.139300 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.148385 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.157463 2149446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:48:46.165914 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.175804 2149446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.185241 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.194290 2149446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:48:46.201953 2149446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:48:46.209542 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.327150 2149446 ssh_runner.go:195] Run: sudo systemctl restart crio
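	Note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, puts conmon in the pod cgroup, and opens unprivileged low ports via default_sysctls, then restarts CRI-O. A sketch for confirming the edits landed (key names taken from the sed expressions above; the exact layout of the file may differ):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# roughly expected:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",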
	I1222 01:48:46.504853 2149446 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:48:46.504948 2149446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:48:46.509123 2149446 start.go:564] Will wait 60s for crictl version
	I1222 01:48:46.509240 2149446 ssh_runner.go:195] Run: which crictl
	I1222 01:48:46.513227 2149446 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:48:46.539190 2149446 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:48:46.539329 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.570401 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.603161 2149446 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:48:46.606112 2149446 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:48:46.622587 2149446 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:48:46.626479 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.639159 2149446 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:48:46.642073 2149446 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:48:46.642257 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:46.642336 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.682401 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.682426 2149446 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:48:46.682483 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.712357 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.712379 2149446 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:48:46.712388 2149446 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:48:46.712483 2149446 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
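	Note: the empty ExecStart= line above is systemd's idiom for clearing the packaged command before the drop-in substitutes minikube's own kubelet invocation. Once the files are scp'd below, the merged unit can be inspected on the node with (a sketch, assuming shell access):

	sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in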
	I1222 01:48:46.712573 2149446 ssh_runner.go:195] Run: crio config
	I1222 01:48:46.797974 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:46.798006 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:46.798025 2149446 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:48:46.798055 2149446 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:48:46.798231 2149446 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
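	Note: the rendered kubeadm config is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A cheap sanity check on the copy scp'd to the node below (path taken from the scp line that follows):

	grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new   # expect 4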
	I1222 01:48:46.798309 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:48:46.806374 2149446 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:48:46.806461 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:48:46.814198 2149446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:48:46.828274 2149446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:48:46.841483 2149446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:48:46.854374 2149446 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:48:46.857976 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.868061 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.994841 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:47.017882 2149446 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:48:47.017947 2149446 certs.go:195] generating shared ca certs ...
	I1222 01:48:47.017976 2149446 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.018155 2149446 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:48:47.018235 2149446 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:48:47.018271 2149446 certs.go:257] generating profile certs ...
	I1222 01:48:47.018410 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:48:47.018519 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:48:47.018584 2149446 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:48:47.018730 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:48:47.018786 2149446 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:48:47.018810 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:48:47.018867 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:48:47.018918 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:48:47.018973 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:48:47.019053 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:47.019666 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:48:47.038515 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:48:47.057188 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:48:47.080288 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:48:47.101214 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:48:47.120685 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:48:47.140350 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:48:47.160406 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:48:47.178292 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:48:47.195480 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:48:47.213647 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:48:47.231427 2149446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:48:47.244572 2149446 ssh_runner.go:195] Run: openssl version
	I1222 01:48:47.250948 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.258307 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:48:47.265613 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269481 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.314712 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:48:47.322028 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.329298 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:48:47.336716 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340415 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340535 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.381670 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:48:47.389031 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.396101 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:48:47.403471 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407346 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.449487 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
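
The openssl/ln sequence above is how each certificate enters the system trust store: OpenSSL looks CA certificates up by a hash of their subject, so every PEM under /usr/share/ca-certificates needs a symlink named <subject-hash>.0 in /etc/ssl/certs, which is what the 3ec20f2e.0, b5213941.0 and 51391683.0 checks verify. The same two steps from Go, as a sketch that assumes the openssl binary is on PATH (the path below is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func trustCert(pemPath string) error {
        // "openssl x509 -hash -noout -in <pem>" prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent of "ln -fs": drop any stale link, then relink.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked")
    }
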
	I1222 01:48:47.456987 2149446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:48:47.460641 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:48:47.501381 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:48:47.542911 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:48:47.587141 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:48:47.628220 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:48:47.670351 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
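
The "-checkend 86400" runs above ask openssl whether each control-plane certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit here would force the certificate to be regenerated before the cluster restarts. The same predicate in pure Go with crypto/x509, no shelling out (a sketch; the path is one of the files checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires before now+d, i.e. the openssl "-checkend" predicate.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
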
	I1222 01:48:47.715724 2149446 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:47.715830 2149446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:48:47.715926 2149446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:48:47.743013 2149446 cri.go:96] found id: ""
	I1222 01:48:47.743133 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:48:47.750858 2149446 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:48:47.750922 2149446 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:48:47.750979 2149446 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:48:47.758109 2149446 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
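
"Process exited with status 1" in the block above is not a failure: `sudo test -d /data/minikube` is used as a predicate, and exit status 1 simply means the directory is absent, so the compat-symlink step is skipped. Distinguishing "condition false" from "command broke" when shelling out from Go looks roughly like this (a sketch, assuming a Linux host with /usr/bin/test):

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func dirExists(path string) (bool, error) {
        err := exec.Command("test", "-d", path).Run()
        if err == nil {
            return true, nil // exit 0: directory exists
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return false, nil // exit 1: condition false, not an error
        }
        return false, err // e.g. binary not found
    }

    func main() {
        ok, err := dirExists("/data/minikube")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("/data/minikube present:", ok)
    }
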
	I1222 01:48:47.758748 2149446 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.759013 2149446 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-704217" cluster setting kubeconfig missing "newest-cni-704217" context setting]
	I1222 01:48:47.759473 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
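
The kubeconfig.go lines above show the repair path: the profile's cluster and context entries are missing from the shared kubeconfig, so minikube rewrites the file under a file lock (the WriteFile line). Adding a missing cluster/context entry with client-go's clientcmd helpers looks roughly like this (a sketch, not minikube's kubeconfig.go; the server URL comes from the node entry in the StartCluster config above):

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := clientcmd.RecommendedHomeFile // ~/.kube/config by default
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            log.Fatal(err)
        }
        name := "newest-cni-704217"
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.85.2:8443"}
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            log.Fatal(err)
        }
    }
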
	I1222 01:48:47.760858 2149446 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:48:47.768601 2149446 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:48:47.768668 2149446 kubeadm.go:602] duration metric: took 17.733602ms to restartPrimaryControlPlane
	I1222 01:48:47.768685 2149446 kubeadm.go:403] duration metric: took 52.971018ms to StartCluster
	I1222 01:48:47.768701 2149446 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.768757 2149446 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.769685 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.769905 2149446 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:48:47.770207 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:47.770291 2149446 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:48:47.770386 2149446 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-704217"
	I1222 01:48:47.770417 2149446 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-704217"
	I1222 01:48:47.770506 2149446 addons.go:70] Setting dashboard=true in profile "newest-cni-704217"
	I1222 01:48:47.770541 2149446 addons.go:70] Setting default-storageclass=true in profile "newest-cni-704217"
	I1222 01:48:47.770547 2149446 addons.go:239] Setting addon dashboard=true in "newest-cni-704217"
	W1222 01:48:47.770588 2149446 addons.go:248] addon dashboard should already be in state true
	I1222 01:48:47.770610 2149446 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-704217"
	I1222 01:48:47.770621 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.770999 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.771324 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.770518 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.772038 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.775384 2149446 out.go:179] * Verifying Kubernetes components...
	I1222 01:48:47.778350 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:47.802410 2149446 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:48:47.805256 2149446 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:48:47.807963 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:48:47.807985 2149446 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:48:47.808059 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.828399 2149446 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:48:47.831382 2149446 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:47.831415 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:48:47.831508 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.834144 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.844610 2149446 addons.go:239] Setting addon default-storageclass=true in "newest-cni-704217"
	I1222 01:48:47.844670 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.845187 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.877451 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.891162 2149446 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:47.891205 2149446 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:48:47.891280 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.926288 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
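
Three "new ssh client" lines to the same 127.0.0.1:38786 endpoint: each enabled addon (dashboard, storage-provisioner, default-storageclass) is applied by its own goroutine with its own SSH session, which is why their log lines interleave from here on. The fan-out pattern, as a sketch with the addon names from this log and a stand-in apply function:

    package main

    import (
        "fmt"
        "sync"
    )

    func applyAddon(name string) error {
        // Stand-in for: scp the manifests, then kubectl apply them over SSH.
        fmt.Println("applying", name)
        return nil
    }

    func main() {
        addons := []string{"dashboard", "storage-provisioner", "default-storageclass"}
        var wg sync.WaitGroup
        errs := make(chan error, len(addons))
        for _, a := range addons {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                if err := applyAddon(name); err != nil {
                    errs <- fmt.Errorf("%s: %w", name, err)
                }
            }(a)
        }
        wg.Wait()
        close(errs)
        for err := range errs {
            fmt.Println("addon failed:", err)
        }
    }
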
	I1222 01:48:48.042819 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:48.112488 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:48:48.112513 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:48:48.128623 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:48:48.128699 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:48:48.148283 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:48:48.148321 2149446 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:48:48.156766 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:48.161900 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:48.167407 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:48:48.167429 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:48:48.203144 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:48:48.203175 2149446 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:48:48.254999 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:48:48.255073 2149446 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:48:48.269219 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:48:48.269248 2149446 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:48:48.282496 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:48:48.282561 2149446 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:48:48.295350 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.295416 2149446 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:48:48.308102 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.846401 2149446 api_server.go:52] waiting for apiserver process to appear ...
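
From here the log splits into two interleaved loops. The api_server.go line starts the first one: minikube polls `sudo pgrep -xnf kube-apiserver.*minikube.*` (the roughly twice-per-second pgrep lines that follow) until a kube-apiserver process appears. A sketch of that wait, relying on pgrep's exit status (0 when the pattern matches, 1 when it does not); the poll interval and timeout are illustrative:

    package main

    import (
        "context"
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func waitForProcess(ctx context.Context, pattern string) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            // pgrep -f matches the full command line; exit 0 means a match exists.
            if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver process is up")
    }
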
	W1222 01:48:48.846466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:48.846520 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:48.846522 2149446 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846636 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846839 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.013510 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:49.073085 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.109119 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:49.170278 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.173753 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.230752 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.347104 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:49.363263 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:49.431947 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.591271 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:49.624743 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.676288 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.711818 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.847130 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.002166 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:50.073384 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.206474 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:46.597453 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:49.097694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:50.265056 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.347283 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.486129 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:50.543791 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.847346 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.143881 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:51.205787 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.283923 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:51.341457 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.542158 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:51.609543 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.847044 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.045399 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:52.115058 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.347521 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.607702 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:52.683195 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.846571 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.305882 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:53.347187 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:53.376466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.687056 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:53.754007 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.847202 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.994728 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:54.057241 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:54.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:54.846622 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:51.596668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:53.597225 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:55.347576 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:55.800828 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:55.847504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:55.860654 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.347301 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:56.472869 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:56.531116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.846537 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.346757 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.706997 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:57.770163 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:57.847151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.347452 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.521177 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:58.583041 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:58.847501 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.347147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.847527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
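Between apply attempts, minikube polls about twice a second over SSH for a running apiserver process (the repeated ssh_runner "pgrep -xnf kube-apiserver.*minikube.*" entries). A minimal local sketch of that kind of poll loop follows; the pgrep pattern is taken from the log, while the function name, interval handling, and running it locally instead of over SSH are illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess runs `pgrep -xnf <pattern>` on a ticker until a matching
// process exists or the context deadline passes, mirroring the repeated
// ssh_runner entries above.
func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
			// pgrep exits 0 only when at least one process matches.
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is up")
}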
	I1222 01:48:59.918746 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:59.977904 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.192217 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:55.597516 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:57.597678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.100741 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.333722 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.346752 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:00.847514 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.346636 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.846688 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.347266 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.846533 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.346590 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.346607 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.761472 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:49:04.773029 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:49:04.847157 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:04.857098 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.857138 2149446 retry.go:84] will retry after 7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:04.857185 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.879231 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:04.950615 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:02.596826 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:05.096792 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:05.347388 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:05.847467 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.346572 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.846691 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.347421 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.347439 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.847592 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.847473 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:07.596780 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:09.597714 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:10.346839 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:10.642468 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:10.708302 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.708346 2149446 retry.go:84] will retry after 9.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.847493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.346650 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846502 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:11.932894 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:12.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:12.847370 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.346723 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.846605 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.347318 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.846610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:12.096720 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:14.596604 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:15.276803 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:15.337116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.337152 2149446 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.347320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:15.847453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.846547 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.347494 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.847450 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.347436 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.846663 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.347534 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.846595 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:16.596756 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:19.096677 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:20.347375 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:20.550393 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:20.629907 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.629948 2149446 retry.go:84] will retry after 13.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.847352 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.347359 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.847541 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.346619 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.846606 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.347285 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.847257 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.346695 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.847195 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:21.096800 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:23.596617 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:25.347322 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:25.847511 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.346649 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.847539 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.261998 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:27.328263 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.328301 2149446 retry.go:84] will retry after 13.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.347392 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.847138 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.346635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.547391 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:28.610028 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:28.847445 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.346661 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.847366 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:25.596765 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:28.096729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:30.346864 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:30.846648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.846643 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.846656 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.346602 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.846639 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.494891 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:34.556245 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.556282 2149446 retry.go:84] will retry after 24.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.846617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:30.596693 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:33.096665 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:35.596759 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:38.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:38.596364 2142162 node_ready.go:38] duration metric: took 6m0.000147881s for node "no-preload-491054" to be "Ready" ...
	I1222 01:49:38.599511 2142162 out.go:203] 
	W1222 01:49:38.602362 2142162 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:49:38.602381 2142162 out.go:285] * 
	W1222 01:49:38.604626 2142162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:49:38.608399 2142162 out.go:203] 
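For the no-preload node, the parallel node_ready.go loop polled the node's Ready condition every couple of seconds until its 6m0s budget ran out, which is what the GUEST_START exit above reports. A sketch of that kind of wait written against client-go follows; the deadline, poll interval, and node name come from the log, while the helper itself is illustrative and not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// context deadline passes -- the same shape as the node_ready.go warnings
// above, which retried for 6m0s before giving up.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
		case <-time.After(2500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "no-preload-491054"))
}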
	I1222 01:49:35.347129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:35.846681 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.346565 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.846669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.347286 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.846895 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.346662 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.847012 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.347115 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.846591 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.347475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.735554 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:40.795474 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:40.795515 2149446 retry.go:84] will retry after 21.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
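Every file in the batch fails the same way: kubectl validates manifests client-side by downloading the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 that download is refused before anything is submitted. minikube responds by retrying the whole apply (retry.go:84) with a growing delay, 21.6s here and 44.5s on the next attempt further below. A minimal sketch of that apply-and-retry pattern, assuming illustrative delays and attempt counts rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry mirrors the "apply failed, will retry" pattern in
    // this log: run `kubectl apply`, and while it fails, wait with a
    // growing delay and try again. Delay and attempt count here are
    // illustrative assumptions, not minikube's actual values.
    func applyWithRetry(manifest string, attempts int) error {
        delay := 20 * time.Second
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            fmt.Printf("apply failed, will retry after %s:\n%s", delay, out)
            time.Sleep(delay)
            delay = delay * 2 // double the wait between attempts
        }
        return fmt.Errorf("apply of %s still failing after %d attempts", manifest, attempts)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/dashboard-ns.yaml", 3); err != nil {
            fmt.Println(err)
        }
    }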
	I1222 01:49:40.846792 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.172727 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:41.231783 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:41.347078 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.846609 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.346631 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.847610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.346556 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.847443 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.347395 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.847490 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.346921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.846697 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.346758 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.847524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.346683 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.846830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:47.846917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:47.875900 2149446 cri.go:96] found id: ""
	I1222 01:49:47.875920 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.875929 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:47.875935 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:47.875999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:47.911877 2149446 cri.go:96] found id: ""
	I1222 01:49:47.911899 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.911908 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:47.911914 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:47.911982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:47.941469 2149446 cri.go:96] found id: ""
	I1222 01:49:47.941492 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.941500 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:47.941507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:47.941566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:47.967048 2149446 cri.go:96] found id: ""
	I1222 01:49:47.967088 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.967097 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:47.967104 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:47.967180 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:47.995110 2149446 cri.go:96] found id: ""
	I1222 01:49:47.995135 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.995144 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:47.995150 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:47.995212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:48.023445 2149446 cri.go:96] found id: ""
	I1222 01:49:48.023513 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.023525 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:48.023533 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:48.023607 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:48.051956 2149446 cri.go:96] found id: ""
	I1222 01:49:48.052023 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.052044 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:48.052063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:48.052151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:48.079789 2149446 cri.go:96] found id: ""
	I1222 01:49:48.079817 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.079827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
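Once the process probe has failed long enough, the diagnostics ask the container runtime directly. crictl ps -a --quiet --name=<component> prints only the IDs of matching containers, so the empty result (found id: "") for every control-plane component means the containers were never created, not merely that they exited. A sketch of the sweep, with the component list taken from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // One crictl query per control-plane component, as in the log;
        // --quiet prints only container IDs, so empty output means no
        // container with that name exists in any state.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, _ := exec.Command("sudo", "crictl", "--timeout=10s",
                "ps", "-a", "--quiet", "--name="+name).Output()
            if len(bytes.TrimSpace(out)) == 0 {
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }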
	I1222 01:49:48.079836 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:48.079847 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:48.148881 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:48.148917 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:48.166528 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:48.166562 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:48.230410 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:48.221856    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.222556    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224311    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224847    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.226455    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:48.230487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:48.230509 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:48.259694 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:48.259729 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
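With no containers to inspect, log collection falls back to host-level sources: the kubelet and CRI-O journals, the kernel ring buffer, kubectl describe nodes (which fails with the same connection refused), and a raw container listing. A sketch of that gathering step, with the commands copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The five log sources gathered in each diagnostic cycle above.
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            fmt.Println("Gathering logs for", s.name, "...")
            out, _ := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("%s\n", out)
        }
    }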
	I1222 01:49:50.788374 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:50.799012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:50.799080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:50.824427 2149446 cri.go:96] found id: ""
	I1222 01:49:50.824451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.824460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:50.824467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:50.824526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:50.857602 2149446 cri.go:96] found id: ""
	I1222 01:49:50.857626 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.857636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:50.857642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:50.857700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:50.886784 2149446 cri.go:96] found id: ""
	I1222 01:49:50.886809 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.886817 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:50.886824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:50.886881 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:50.913209 2149446 cri.go:96] found id: ""
	I1222 01:49:50.913239 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.913249 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:50.913256 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:50.913362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:50.939349 2149446 cri.go:96] found id: ""
	I1222 01:49:50.939372 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.939381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:50.939387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:50.939458 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:50.968002 2149446 cri.go:96] found id: ""
	I1222 01:49:50.968068 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.968080 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:50.968092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:50.968182 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:50.992529 2149446 cri.go:96] found id: ""
	I1222 01:49:50.992596 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.992618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:50.992638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:50.992738 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:51.020072 2149446 cri.go:96] found id: ""
	I1222 01:49:51.020143 2149446 logs.go:282] 0 containers: []
	W1222 01:49:51.020165 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:51.020185 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:51.020224 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:51.085768 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:51.085801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:51.102878 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:51.102907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:51.169244 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:51.160719    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.161459    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163246    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163762    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.165256    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:51.169268 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:51.169280 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:51.199236 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:51.199275 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:53.726478 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:53.736764 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:53.736833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:53.761645 2149446 cri.go:96] found id: ""
	I1222 01:49:53.761673 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.761681 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:53.761693 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:53.761752 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:53.790798 2149446 cri.go:96] found id: ""
	I1222 01:49:53.790875 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.790892 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:53.790899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:53.790960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:53.814874 2149446 cri.go:96] found id: ""
	I1222 01:49:53.814941 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.814956 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:53.814963 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:53.815022 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:53.839840 2149446 cri.go:96] found id: ""
	I1222 01:49:53.839906 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.839930 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:53.839950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:53.840033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:53.875780 2149446 cri.go:96] found id: ""
	I1222 01:49:53.875850 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.875870 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:53.875890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:53.875989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:53.908372 2149446 cri.go:96] found id: ""
	I1222 01:49:53.908451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.908473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:53.908519 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:53.908603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:53.937905 2149446 cri.go:96] found id: ""
	I1222 01:49:53.937975 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.937989 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:53.937997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:53.938056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:53.964017 2149446 cri.go:96] found id: ""
	I1222 01:49:53.964043 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.964052 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:53.964061 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:53.964072 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:53.980713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:53.980743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:54.051694 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:54.042644    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.043169    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.044865    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.045545    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.047279    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:54.051718 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:54.051732 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:54.081680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:54.081715 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:54.113831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:54.113858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.682134 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:56.692686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:56.692755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:56.718335 2149446 cri.go:96] found id: ""
	I1222 01:49:56.718360 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.718369 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:56.718375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:56.718433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:56.750169 2149446 cri.go:96] found id: ""
	I1222 01:49:56.750199 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.750207 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:56.750214 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:56.750289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:56.775009 2149446 cri.go:96] found id: ""
	I1222 01:49:56.775033 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.775043 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:56.775049 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:56.775118 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:56.800202 2149446 cri.go:96] found id: ""
	I1222 01:49:56.800226 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.800245 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:56.800253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:56.800317 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:56.825535 2149446 cri.go:96] found id: ""
	I1222 01:49:56.825559 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.825567 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:56.825574 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:56.825633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:56.860355 2149446 cri.go:96] found id: ""
	I1222 01:49:56.860379 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.860388 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:56.860395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:56.860455 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:56.900413 2149446 cri.go:96] found id: ""
	I1222 01:49:56.900481 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.900502 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:56.900520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:56.900608 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:56.930278 2149446 cri.go:96] found id: ""
	I1222 01:49:56.930343 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.930365 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:56.930381 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:56.930394 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.997740 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:56.997776 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:57.016721 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:57.016750 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:57.084545 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:57.076568    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.077143    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.078617    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.079164    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.080737    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:57.084566 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:57.084580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:57.112759 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:57.112793 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:58.183323 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:58.246165 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:58.246322 2149446 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:49:59.087252 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:59.163517 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:59.163599 2149446 retry.go:84] will retry after 41.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
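The stderr suggests --validate=false, but that flag only skips the failing OpenAPI download; the apply itself still has to reach the apiserver on localhost:8443 and would be refused all the same, which is why minikube retries rather than disabling validation. For reference, a sketch of the suggested invocation, with the manifest path taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --validate=false skips the client-side schema validation that is
        // failing above. It would not rescue this apply: the request to
        // localhost:8443 is still refused while the apiserver is down.
        out, err := exec.Command("kubectl", "apply", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
        fmt.Printf("%s%v\n", out, err)
    }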
	I1222 01:49:59.646777 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:59.657009 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:59.657077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:59.682535 2149446 cri.go:96] found id: ""
	I1222 01:49:59.682562 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.682580 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:59.682587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:59.682685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:59.712155 2149446 cri.go:96] found id: ""
	I1222 01:49:59.712179 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.712194 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:59.712201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:59.712313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:59.736832 2149446 cri.go:96] found id: ""
	I1222 01:49:59.736859 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.736868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:59.736884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:59.736987 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:59.767806 2149446 cri.go:96] found id: ""
	I1222 01:49:59.767874 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.767897 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:59.767917 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:59.767991 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:59.795439 2149446 cri.go:96] found id: ""
	I1222 01:49:59.795504 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.795529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:59.795549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:59.795616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:59.819847 2149446 cri.go:96] found id: ""
	I1222 01:49:59.819882 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.819892 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:59.819899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:59.819966 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:59.850480 2149446 cri.go:96] found id: ""
	I1222 01:49:59.850509 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.850518 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:59.850524 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:59.850592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:59.884120 2149446 cri.go:96] found id: ""
	I1222 01:49:59.884144 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.884153 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:59.884161 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:59.884173 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:59.957806 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:59.957845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:59.974377 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:59.974406 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:00.101182 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:00.069859    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.070359    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.072368    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.077118    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.078499    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:00.101220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:00.101234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:00.179956 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:00.180075 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:02.382663 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:02.449495 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:50:02.449536 2149446 retry.go:84] will retry after 44.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1222 01:50:02.730117 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:02.740995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:02.741072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:02.768772 2149446 cri.go:96] found id: ""
	I1222 01:50:02.768797 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.768805 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:02.768812 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:02.768871 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:02.794287 2149446 cri.go:96] found id: ""
	I1222 01:50:02.794311 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.794319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:02.794326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:02.794389 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:02.819466 2149446 cri.go:96] found id: ""
	I1222 01:50:02.819546 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.819568 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:02.819586 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:02.819669 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:02.853024 2149446 cri.go:96] found id: ""
	I1222 01:50:02.853053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.853062 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:02.853068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:02.853137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:02.880476 2149446 cri.go:96] found id: ""
	I1222 01:50:02.880500 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.880509 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:02.880522 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:02.880594 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:02.911828 2149446 cri.go:96] found id: ""
	I1222 01:50:02.911852 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.911861 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:02.911868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:02.911945 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:02.938488 2149446 cri.go:96] found id: ""
	I1222 01:50:02.938567 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.938589 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:02.938610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:02.938701 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:02.963720 2149446 cri.go:96] found id: ""
	I1222 01:50:02.963747 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.963756 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:02.963765 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:02.963777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:03.029961 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:03.030002 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:03.047145 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:03.047175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:03.113634 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:03.105215    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.105984    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.107648    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.108298    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.109821    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
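The five memcache.go:265 entries per attempt are kubectl's API-discovery retries, each hitting the same refused socket. The more useful signal for why the apiserver container never starts is in the kubelet journal this loop already collects; a hedged filter over that journal (the grep pattern is illustrative, not from the harness):

	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'apiserver|error|fail' | tail -n 40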
	I1222 01:50:03.113655 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:03.113667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:03.143385 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:03.143421 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
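The block above is one complete iteration of minikube's readiness loop: a pgrep for a running apiserver process, a crictl listing per control-plane component, then log gathering. A rough standalone sketch of the per-component probe, assembled only from commands already visible in this log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$c")
	    # Empty output reproduces the "No container was found matching" warnings above
	    [ -z "$ids" ] && echo "No container was found matching \"$c\""
	done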
	[01:50:05–01:50:23: the same readiness cycle repeats seven more times (at 01:50:05, 01:50:08, 01:50:11, 01:50:14, 01:50:17, 01:50:20 and 01:50:23); every pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet or kubernetes-dashboard containers, gathers the kubelet, dmesg, CRI-O and container-status logs, and fails "describe nodes" with identical dial tcp [::1]:8443: connect: connection refused errors.]
	I1222 01:50:26.466430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:26.480754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:26.480834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:26.505742 2149446 cri.go:96] found id: ""
	I1222 01:50:26.505825 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.505856 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:26.505876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:26.505986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:26.532705 2149446 cri.go:96] found id: ""
	I1222 01:50:26.532781 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.532816 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:26.532839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:26.532932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:26.557032 2149446 cri.go:96] found id: ""
	I1222 01:50:26.557116 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.557139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:26.557170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:26.557247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:26.583725 2149446 cri.go:96] found id: ""
	I1222 01:50:26.583812 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.583835 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:26.583853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:26.583962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:26.617024 2149446 cri.go:96] found id: ""
	I1222 01:50:26.617089 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.617121 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:26.617140 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:26.617246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:26.646386 2149446 cri.go:96] found id: ""
	I1222 01:50:26.646464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.646488 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:26.646507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:26.646612 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:26.678246 2149446 cri.go:96] found id: ""
	I1222 01:50:26.678269 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.678278 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:26.678285 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:26.678347 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:26.703982 2149446 cri.go:96] found id: ""
	I1222 01:50:26.704008 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.704017 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
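
The block above probes each expected component by container name, and every probe returns an empty ID list, so no control-plane container exists in any state. A compact shell equivalent of that scan:

    # list CRI containers (running or exited) for each expected component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
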
	I1222 01:50:26.704026 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:26.704042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
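
The dmesg step keeps only kernel messages at warning severity or above, uncolored and unpaged, truncated to the last 400 lines; the flags below are exactly those in the invocation above:

    # -P: no pager, -H: human-readable output, -L=never: no color;
    # --level restricts to warn and more severe records
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
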
	I1222 01:50:26.720933 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:26.720994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:26.786938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
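
The describe-nodes check runs the bundled kubectl against the node-local kubeconfig, so its failure is independent of the host's kubeconfig. To reproduce it by hand from a node shell (both paths taken from the log itself):

    # re-run the failing check with the node-local kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # confirm which endpoint that kubeconfig targets (expected: https://localhost:8443)
    sudo grep 'server:' /var/lib/minikube/kubeconfig
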
	I1222 01:50:26.786961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:26.786974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:26.815331 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:26.815367 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:26.845602 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:26.845630 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.422527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:29.433711 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:29.433793 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:29.465497 2149446 cri.go:96] found id: ""
	I1222 01:50:29.465548 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.465558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:29.465564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:29.465634 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:29.490894 2149446 cri.go:96] found id: ""
	I1222 01:50:29.490919 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.490927 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:29.490938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:29.490998 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:29.520626 2149446 cri.go:96] found id: ""
	I1222 01:50:29.520661 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.520670 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:29.520677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:29.520747 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:29.547848 2149446 cri.go:96] found id: ""
	I1222 01:50:29.547885 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.547893 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:29.547900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:29.547982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:29.577023 2149446 cri.go:96] found id: ""
	I1222 01:50:29.577048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.577057 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:29.577065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:29.577127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:29.611778 2149446 cri.go:96] found id: ""
	I1222 01:50:29.611814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.611823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:29.611830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:29.611903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:29.643842 2149446 cri.go:96] found id: ""
	I1222 01:50:29.643918 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.643941 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:29.643958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:29.644044 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:29.674585 2149446 cri.go:96] found id: ""
	I1222 01:50:29.674658 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.674680 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:29.674704 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:29.674743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:29.703700 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:29.703743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:29.735551 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:29.735577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.800624 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:29.800662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:29.817316 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:29.817364 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:29.907057 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:32.407335 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:32.417713 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:32.417782 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:32.443620 2149446 cri.go:96] found id: ""
	I1222 01:50:32.443648 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.443656 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:32.443662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:32.443722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:32.473803 2149446 cri.go:96] found id: ""
	I1222 01:50:32.473831 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.473840 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:32.473847 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:32.473908 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:32.503900 2149446 cri.go:96] found id: ""
	I1222 01:50:32.503928 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.503936 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:32.503943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:32.504007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:32.533317 2149446 cri.go:96] found id: ""
	I1222 01:50:32.533427 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.533436 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:32.533442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:32.533503 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:32.558269 2149446 cri.go:96] found id: ""
	I1222 01:50:32.558292 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.558300 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:32.558307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:32.558366 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:32.583954 2149446 cri.go:96] found id: ""
	I1222 01:50:32.583976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.583984 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:32.583990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:32.584048 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:32.619938 2149446 cri.go:96] found id: ""
	I1222 01:50:32.619959 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.619967 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:32.619974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:32.620032 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:32.646855 2149446 cri.go:96] found id: ""
	I1222 01:50:32.646876 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.646883 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:32.646892 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:32.646903 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:32.681292 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:32.681331 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:32.709454 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:32.709486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:32.774478 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:32.774512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:32.790947 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:32.790977 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:32.865716 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:32.856728    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.857643    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.858424    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.859539    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.860242    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:32.856728    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.857643    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.858424    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.859539    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.860242    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:35.366405 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:35.377118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:35.377211 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:35.405034 2149446 cri.go:96] found id: ""
	I1222 01:50:35.405073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.405083 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:35.405089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:35.405172 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:35.429958 2149446 cri.go:96] found id: ""
	I1222 01:50:35.429985 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.429994 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:35.430002 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:35.430063 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:35.455069 2149446 cri.go:96] found id: ""
	I1222 01:50:35.455094 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.455103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:35.455112 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:35.455173 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:35.482985 2149446 cri.go:96] found id: ""
	I1222 01:50:35.483053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.483075 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:35.483089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:35.483165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:35.506805 2149446 cri.go:96] found id: ""
	I1222 01:50:35.506874 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.506894 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:35.506909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:35.506983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:35.532897 2149446 cri.go:96] found id: ""
	I1222 01:50:35.532925 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.532934 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:35.532941 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:35.533002 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:35.564747 2149446 cri.go:96] found id: ""
	I1222 01:50:35.564774 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.564783 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:35.564789 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:35.564858 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:35.589912 2149446 cri.go:96] found id: ""
	I1222 01:50:35.589939 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.589948 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:35.589956 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:35.589969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:35.665301 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:35.665441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:35.682348 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:35.682377 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:35.749489 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:35.741663    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.742347    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.743818    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.744264    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.745698    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:35.741663    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.742347    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.743818    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.744264    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.745698    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:35.749511 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:35.749524 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:35.776750 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:35.776782 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.306113 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:38.316656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:38.316729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:38.343191 2149446 cri.go:96] found id: ""
	I1222 01:50:38.343229 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.343238 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:38.343245 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:38.343311 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:38.371584 2149446 cri.go:96] found id: ""
	I1222 01:50:38.371609 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.371618 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:38.371625 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:38.371686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:38.399378 2149446 cri.go:96] found id: ""
	I1222 01:50:38.399401 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.399410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:38.399416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:38.399476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:38.424466 2149446 cri.go:96] found id: ""
	I1222 01:50:38.424492 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.424500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:38.424513 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:38.424575 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:38.449419 2149446 cri.go:96] found id: ""
	I1222 01:50:38.449452 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.449461 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:38.449468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:38.449533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:38.475321 2149446 cri.go:96] found id: ""
	I1222 01:50:38.475347 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.475356 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:38.475363 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:38.475423 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:38.499428 2149446 cri.go:96] found id: ""
	I1222 01:50:38.499458 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.499469 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:38.499476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:38.499541 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:38.524738 2149446 cri.go:96] found id: ""
	I1222 01:50:38.524766 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.524780 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:38.524789 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:38.524800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:38.557984 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:38.558022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.589548 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:38.589577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:38.671416 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:38.671453 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:38.688135 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:38.688164 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:38.751396 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:38.743333    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.744008    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.745568    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.746050    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.747565    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:38.743333    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.744008    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.745568    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.746050    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.747565    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:40.899829 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:50:40.960744 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:40.960855 2149446 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
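
The storage-provisioner apply fails during client-side validation: kubectl cannot download the OpenAPI schema because the apiserver is unreachable, so the manifest is never actually submitted. minikube retries the apply on its own ("apply failed, will retry"); a hand-rolled equivalent would gate the retry on apiserver health, sketched here using only paths from this log:

    # block until the apiserver answers /healthz, then re-apply the addon
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw=/healthz >/dev/null 2>&1; do
      sleep 5
    done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml
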
	I1222 01:50:41.252211 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:41.263048 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:41.263123 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:41.296135 2149446 cri.go:96] found id: ""
	I1222 01:50:41.296159 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.296167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:41.296174 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:41.296232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:41.322198 2149446 cri.go:96] found id: ""
	I1222 01:50:41.322277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.322291 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:41.322299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:41.322360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:41.348725 2149446 cri.go:96] found id: ""
	I1222 01:50:41.348750 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.348760 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:41.348766 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:41.348825 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:41.374926 2149446 cri.go:96] found id: ""
	I1222 01:50:41.375005 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.375021 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:41.375029 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:41.375096 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:41.406446 2149446 cri.go:96] found id: ""
	I1222 01:50:41.406537 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.406556 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:41.406566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:41.406648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:41.432212 2149446 cri.go:96] found id: ""
	I1222 01:50:41.432239 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.432247 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:41.432254 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:41.432321 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:41.463730 2149446 cri.go:96] found id: ""
	I1222 01:50:41.463755 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.463765 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:41.463771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:41.463833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:41.490157 2149446 cri.go:96] found id: ""
	I1222 01:50:41.490182 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.490193 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:41.490201 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:41.490213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:41.558943 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:41.558981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:41.576405 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:41.576435 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:41.666063 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:41.666083 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:41.666095 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:41.696072 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:41.696105 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.226469 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:44.236983 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:44.237055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:44.261867 2149446 cri.go:96] found id: ""
	I1222 01:50:44.261892 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.261900 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:44.261907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:44.261971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:44.287438 2149446 cri.go:96] found id: ""
	I1222 01:50:44.287462 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.287471 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:44.287478 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:44.287536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:44.313755 2149446 cri.go:96] found id: ""
	I1222 01:50:44.313779 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.313788 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:44.313794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:44.313861 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:44.347563 2149446 cri.go:96] found id: ""
	I1222 01:50:44.347589 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.347597 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:44.347604 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:44.347688 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:44.377289 2149446 cri.go:96] found id: ""
	I1222 01:50:44.377313 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.377354 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:44.377362 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:44.377443 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:44.403181 2149446 cri.go:96] found id: ""
	I1222 01:50:44.403207 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.403215 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:44.403222 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:44.403280 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:44.432425 2149446 cri.go:96] found id: ""
	I1222 01:50:44.432449 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.432457 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:44.432465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:44.432529 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:44.458001 2149446 cri.go:96] found id: ""
	I1222 01:50:44.458030 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.458039 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:44.458048 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:44.458089 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.485816 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:44.485844 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:44.552935 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:44.552970 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:44.569824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:44.569852 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:44.652688 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:44.652760 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:44.652785 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:46.992640 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:47.054063 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:47.054189 2149446 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:50:47.057341 2149446 out.go:179] * Enabled addons: 
	I1222 01:50:47.060205 2149446 addons.go:530] duration metric: took 1m59.289939725s for enable addons: enabled=[]
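Every validation error in the dashboard apply above reduces to the same root cause: nothing is answering on localhost:8443, so kubectl cannot download the OpenAPI schema. A minimal way to confirm that by hand from inside the node, using the paths shown in the log (the /healthz route is the standard apiserver health endpoint, assumed here rather than printed by this log):

  # Expect "connection refused" while the apiserver is down.
  curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"

  # kubectl's own suggestion: skip schema validation. This only helps when
  # validation is the sole blocker; here the apply would still fail because
  # the server itself is unreachable.
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --validate=false \
    -f /etc/kubernetes/addons/dashboard-ns.yaml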
	I1222 01:50:47.182147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:47.192818 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:47.192903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:47.224024 2149446 cri.go:96] found id: ""
	I1222 01:50:47.224046 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.224055 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:47.224061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:47.224122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:47.250330 2149446 cri.go:96] found id: ""
	I1222 01:50:47.250357 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.250365 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:47.250372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:47.250431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:47.275271 2149446 cri.go:96] found id: ""
	I1222 01:50:47.275294 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.275302 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:47.275309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:47.275370 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:47.300369 2149446 cri.go:96] found id: ""
	I1222 01:50:47.300394 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.300403 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:47.300409 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:47.300468 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:47.325147 2149446 cri.go:96] found id: ""
	I1222 01:50:47.325213 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.325234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:47.325253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:47.325362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:47.353926 2149446 cri.go:96] found id: ""
	I1222 01:50:47.353953 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.353961 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:47.353968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:47.354028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:47.378935 2149446 cri.go:96] found id: ""
	I1222 01:50:47.378961 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.378970 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:47.378977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:47.379040 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:47.404019 2149446 cri.go:96] found id: ""
	I1222 01:50:47.404042 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.404051 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:47.404061 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:47.404078 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:47.473731 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:47.473766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:47.490597 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:47.490625 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:47.557658 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:47.550047    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.550549    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552046    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552472    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.553897    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:47.550047    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.550549    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552046    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552472    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.553897    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:47.557681 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:47.557694 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:47.586853 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:47.586888 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
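The cycle above (pgrep for the apiserver, a crictl probe per control-plane component, then log gathering) is what minikube repeats while waiting for the cluster to come up. A condensed bash equivalent of one probe pass, reconstructed from the commands in the log:

  # Same probes minikube runs above; an empty result means no container
  # (running or exited) matches that name.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
    [ -z "$ids" ] && echo "no container matching \"$name\""
  done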
	... (probe-and-gather cycle repeated with identical results at 01:50:50, 01:50:53, 01:50:56, 01:50:59, 01:51:02, and 01:51:05: every crictl probe returned no containers and every "describe nodes" attempt failed with the same connection-refused errors; only timestamps and helper PIDs differ) ...
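Since every named probe returns zero containers, even in the all-states listing, the failure appears to sit upstream of any individual component: the control-plane static pods are seemingly never created. A few hedged spot-checks one could run at this point (the manifests path is the kubeadm default, assumed rather than shown in this log):

  sudo systemctl is-active kubelet                # is the kubelet running at all?
  ls /etc/kubernetes/manifests/                   # static pod manifests present?
  sudo journalctl -u kubelet -n 50 --no-pager     # recent kubelet errors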
	I1222 01:51:07.884446 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:07.897419 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:07.897506 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:07.923443 2149446 cri.go:96] found id: ""
	I1222 01:51:07.923466 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.923474 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:07.923481 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:07.923538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:07.949573 2149446 cri.go:96] found id: ""
	I1222 01:51:07.949595 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.949604 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:07.949610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:07.949672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:07.974625 2149446 cri.go:96] found id: ""
	I1222 01:51:07.974648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.974657 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:07.974664 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:07.974722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:08.000701 2149446 cri.go:96] found id: ""
	I1222 01:51:08.000729 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.000738 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:08.000746 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:08.000823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:08.030073 2149446 cri.go:96] found id: ""
	I1222 01:51:08.030098 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.030106 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:08.030113 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:08.030184 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:08.056629 2149446 cri.go:96] found id: ""
	I1222 01:51:08.056665 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.056675 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:08.056682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:08.056742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:08.086264 2149446 cri.go:96] found id: ""
	I1222 01:51:08.086299 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.086308 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:08.086320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:08.086393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:08.116679 2149446 cri.go:96] found id: ""
	I1222 01:51:08.116709 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.116719 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:08.116728 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:08.116748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:08.157494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:08.157521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:08.233683 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:08.233771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:08.250905 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:08.250934 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:08.316451 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:08.307255    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.308718    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.309622    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311377    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311630    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:08.307255    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.308718    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.309622    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311377    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311630    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:08.316472 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:08.316484 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:10.846347 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:10.861517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:10.861699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:10.899351 2149446 cri.go:96] found id: ""
	I1222 01:51:10.899458 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.899504 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:10.899543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:10.899659 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:10.929750 2149446 cri.go:96] found id: ""
	I1222 01:51:10.929818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.929839 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:10.929851 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:10.929929 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:10.960575 2149446 cri.go:96] found id: ""
	I1222 01:51:10.960610 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.960619 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:10.960626 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:10.960698 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:10.985907 2149446 cri.go:96] found id: ""
	I1222 01:51:10.985932 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.985940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:10.985947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:10.986010 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:11.014799 2149446 cri.go:96] found id: ""
	I1222 01:51:11.014878 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.014893 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:11.014901 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:11.014965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:11.040928 2149446 cri.go:96] found id: ""
	I1222 01:51:11.040960 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.040970 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:11.040976 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:11.041078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:11.067246 2149446 cri.go:96] found id: ""
	I1222 01:51:11.067327 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.067351 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:11.067365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:11.067442 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:11.095605 2149446 cri.go:96] found id: ""
	I1222 01:51:11.095631 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.095640 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:11.095650 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:11.095662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:11.169987 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:11.170076 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:11.186770 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:11.186800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:11.248524 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:11.240323    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.241076    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242336    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242961    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.244559    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:11.240323    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.241076    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242336    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242961    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.244559    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:11.248546 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:11.248559 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:11.276178 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:11.276215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:13.807549 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:13.818390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:13.818463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:13.844700 2149446 cri.go:96] found id: ""
	I1222 01:51:13.844722 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.844730 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:13.844736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:13.844795 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:13.875626 2149446 cri.go:96] found id: ""
	I1222 01:51:13.875648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.875656 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:13.875662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:13.875727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:13.904646 2149446 cri.go:96] found id: ""
	I1222 01:51:13.904667 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.904676 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:13.904682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:13.904740 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:13.933165 2149446 cri.go:96] found id: ""
	I1222 01:51:13.933241 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.933263 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:13.933282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:13.933402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:13.961468 2149446 cri.go:96] found id: ""
	I1222 01:51:13.961492 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.961501 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:13.961507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:13.961576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:13.985666 2149446 cri.go:96] found id: ""
	I1222 01:51:13.985689 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.985697 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:13.985704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:13.985760 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:14.018377 2149446 cri.go:96] found id: ""
	I1222 01:51:14.018402 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.018411 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:14.018417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:14.018484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:14.044854 2149446 cri.go:96] found id: ""
	I1222 01:51:14.044879 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.044888 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:14.044897 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:14.044933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:14.110126 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:14.110206 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:14.128030 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:14.128108 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:14.196957 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:14.188697    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.189523    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191343    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191689    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.193153    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:14.188697    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.189523    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191343    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191689    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.193153    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:14.197029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:14.197056 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:14.226166 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:14.226203 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:16.756369 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:16.767083 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:16.767155 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:16.793678 2149446 cri.go:96] found id: ""
	I1222 01:51:16.793704 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.793712 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:16.793719 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:16.793779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:16.819158 2149446 cri.go:96] found id: ""
	I1222 01:51:16.819183 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.819192 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:16.819198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:16.819260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:16.848081 2149446 cri.go:96] found id: ""
	I1222 01:51:16.848103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.848112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:16.848119 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:16.848178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:16.889132 2149446 cri.go:96] found id: ""
	I1222 01:51:16.889210 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.889233 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:16.889253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:16.889357 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:16.922581 2149446 cri.go:96] found id: ""
	I1222 01:51:16.922656 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.922677 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:16.922697 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:16.922788 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:16.947759 2149446 cri.go:96] found id: ""
	I1222 01:51:16.947833 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.947855 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:16.947876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:16.947959 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:16.973072 2149446 cri.go:96] found id: ""
	I1222 01:51:16.973106 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.973115 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:16.973138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:16.973219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:17.003269 2149446 cri.go:96] found id: ""
	I1222 01:51:17.003303 2149446 logs.go:282] 0 containers: []
	W1222 01:51:17.003328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:17.003339 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:17.003371 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:17.070360 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:17.070398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:17.087501 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:17.087530 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:17.166797 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:17.158419    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.159432    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161029    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161376    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.162807    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:17.158419    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.159432    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161029    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161376    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.162807    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:17.166818 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:17.166830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:17.196986 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:17.197018 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:19.726818 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:19.737430 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:19.737505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:19.761268 2149446 cri.go:96] found id: ""
	I1222 01:51:19.761291 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.761300 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:19.761306 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:19.761388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:19.787563 2149446 cri.go:96] found id: ""
	I1222 01:51:19.787587 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.787596 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:19.787602 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:19.787665 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:19.816441 2149446 cri.go:96] found id: ""
	I1222 01:51:19.816468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.816477 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:19.816483 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:19.816540 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:19.851780 2149446 cri.go:96] found id: ""
	I1222 01:51:19.851812 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.851821 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:19.851827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:19.851895 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:19.888751 2149446 cri.go:96] found id: ""
	I1222 01:51:19.888785 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.888794 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:19.888800 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:19.888866 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:19.923050 2149446 cri.go:96] found id: ""
	I1222 01:51:19.923078 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.923087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:19.923093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:19.923151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:19.948402 2149446 cri.go:96] found id: ""
	I1222 01:51:19.948435 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.948444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:19.948451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:19.948520 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:19.973868 2149446 cri.go:96] found id: ""
	I1222 01:51:19.973894 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.973903 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:19.973912 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:19.973942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:20.041369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:20.032095    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.032873    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.034426    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.035065    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.036800    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:20.032095    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.032873    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.034426    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.035065    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.036800    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:20.041391 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:20.041404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:20.070678 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:20.070728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:20.112858 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:20.112886 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:20.189261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:20.189300 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:22.708863 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:22.719695 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:22.719790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:22.746691 2149446 cri.go:96] found id: ""
	I1222 01:51:22.746715 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.746724 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:22.746731 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:22.746790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:22.775484 2149446 cri.go:96] found id: ""
	I1222 01:51:22.775511 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.775521 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:22.775527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:22.775592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:22.801739 2149446 cri.go:96] found id: ""
	I1222 01:51:22.801771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.801781 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:22.801787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:22.801855 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:22.826408 2149446 cri.go:96] found id: ""
	I1222 01:51:22.826431 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.826439 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:22.826446 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:22.826510 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:22.857976 2149446 cri.go:96] found id: ""
	I1222 01:51:22.858000 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.858009 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:22.858016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:22.858076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:22.895329 2149446 cri.go:96] found id: ""
	I1222 01:51:22.895357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.895365 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:22.895372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:22.895431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:22.924900 2149446 cri.go:96] found id: ""
	I1222 01:51:22.924923 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.924932 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:22.924938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:22.924997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:22.949093 2149446 cri.go:96] found id: ""
	I1222 01:51:22.949166 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.949177 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:22.949214 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:22.949228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:22.978825 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:22.978862 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:23.008996 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:23.009026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:23.075469 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:23.075506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:23.092176 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:23.092253 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:23.168462 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:23.160726    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.161296    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.162792    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.163269    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.164704    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:23.160726    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.161296    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.162792    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.163269    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.164704    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:25.669110 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:25.679471 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:25.679544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:25.707773 2149446 cri.go:96] found id: ""
	I1222 01:51:25.707798 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.707807 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:25.707814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:25.707872 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:25.734428 2149446 cri.go:96] found id: ""
	I1222 01:51:25.734452 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.734460 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:25.734480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:25.734536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:25.764253 2149446 cri.go:96] found id: ""
	I1222 01:51:25.764349 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.764381 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:25.764416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:25.764515 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:25.791966 2149446 cri.go:96] found id: ""
	I1222 01:51:25.792042 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.792076 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:25.792100 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:25.792193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:25.820057 2149446 cri.go:96] found id: ""
	I1222 01:51:25.820132 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.820167 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:25.820191 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:25.820279 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:25.854439 2149446 cri.go:96] found id: ""
	I1222 01:51:25.854514 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.854537 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:25.854556 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:25.854661 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:25.887808 2149446 cri.go:96] found id: ""
	I1222 01:51:25.887884 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.887905 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:25.887919 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:25.887996 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:25.922356 2149446 cri.go:96] found id: ""
	I1222 01:51:25.922380 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.922389 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:25.922398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:25.922443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:25.987720 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:25.987760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:26.005813 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:26.005849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:26.075026 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:26.066017    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.066714    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.068425    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.069221    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.070982    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:26.066017    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.066714    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.068425    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.069221    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.070982    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:26.075049 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:26.075061 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:26.105526 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:26.105564 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.644504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:28.655061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:28.655129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:28.679454 2149446 cri.go:96] found id: ""
	I1222 01:51:28.679477 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.679486 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:28.679492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:28.679551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:28.708993 2149446 cri.go:96] found id: ""
	I1222 01:51:28.709018 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.709027 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:28.709033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:28.709095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:28.733612 2149446 cri.go:96] found id: ""
	I1222 01:51:28.733637 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.733646 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:28.733652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:28.733711 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:28.762256 2149446 cri.go:96] found id: ""
	I1222 01:51:28.762285 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.762294 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:28.762300 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:28.762359 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:28.788169 2149446 cri.go:96] found id: ""
	I1222 01:51:28.788193 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.788201 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:28.788207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:28.788266 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:28.816745 2149446 cri.go:96] found id: ""
	I1222 01:51:28.816771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.816780 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:28.816787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:28.816852 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:28.845364 2149446 cri.go:96] found id: ""
	I1222 01:51:28.845392 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.845401 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:28.845407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:28.845469 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:28.881787 2149446 cri.go:96] found id: ""
	I1222 01:51:28.881818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.881827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:28.881836 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:28.881855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:28.914780 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:28.914845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.954392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:28.954419 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:29.021327 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:29.021365 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:29.037773 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:29.037801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:29.100009 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:29.091767    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.092439    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.093858    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.094561    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.096086    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:29.091767    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.092439    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.093858    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.094561    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.096086    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
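	The cycle above is minikube's control-plane health check: it looks for a running kube-apiserver process, asks the CRI runtime (via crictl) for each expected control-plane container, and, finding none, gathers kubelet, dmesg, CRI-O, and container-status logs before retrying. The individual probes can be replayed by hand from inside the node (for example over minikube ssh); a minimal sketch built only from commands that appear verbatim in this log:

	    # is an apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # does the CRI runtime know about any apiserver container (running or exited)?
	    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	    # kubelet is responsible for starting the static control-plane pods
	    sudo journalctl -u kubelet -n 400
	    # fails with "connection refused" for as long as nothing listens on 8443
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	In the runs recorded here every probe comes back empty or non-zero, which is why the report keeps falling back to the journal logs.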
	I1222 01:51:31.600320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:31.612534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:31.612605 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:31.639353 2149446 cri.go:96] found id: ""
	I1222 01:51:31.639376 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.639384 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:31.639391 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:31.639449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:31.664903 2149446 cri.go:96] found id: ""
	I1222 01:51:31.664927 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.664936 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:31.664942 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:31.665003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:31.694601 2149446 cri.go:96] found id: ""
	I1222 01:51:31.694626 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.694635 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:31.694642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:31.694700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:31.719433 2149446 cri.go:96] found id: ""
	I1222 01:51:31.719476 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.719485 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:31.719493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:31.719573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:31.744350 2149446 cri.go:96] found id: ""
	I1222 01:51:31.744379 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.744388 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:31.744395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:31.744457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:31.768144 2149446 cri.go:96] found id: ""
	I1222 01:51:31.768171 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.768180 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:31.768187 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:31.768246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:31.793170 2149446 cri.go:96] found id: ""
	I1222 01:51:31.793195 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.793204 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:31.793211 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:31.793269 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:31.817469 2149446 cri.go:96] found id: ""
	I1222 01:51:31.817495 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.817510 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:31.817520 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:31.817531 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:31.833979 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:31.834009 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:31.921313 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:31.921411 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:31.921424 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:31.950732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:31.950766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:31.979280 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:31.979390 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
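	The timestamps show this check re-running roughly every three seconds (01:51:28, 01:51:31, 01:51:34, ...). Behaviorally it is a poll-until-ready loop; a rough shell equivalent of the observed cadence (an illustration of the behavior in the log, not minikube's actual Go implementation):

	    # keep probing until an apiserver process appears, pausing ~3s between attempts
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3
	    done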
	[... the same diagnostic cycle repeats at 01:51:34, 01:51:37, 01:51:40, 01:51:43, 01:51:46, and 01:51:49: each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers, gathers the same kubelet, dmesg, CRI-O, and container-status logs, and fails "describe nodes" with the identical connection-refused error on localhost:8443; only the timestamps and process IDs differ ...]
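	Every condensed pass points at the same root cause: crictl reports zero control-plane containers, so nothing listens on port 8443 and each kubectl call dies with "connect: connection refused" on [::1]:8443. If shell access is available, the endpoint can also be probed directly; a hypothetical manual check (kube-apiserver serves a /livez endpoint once it is up):

	    # "connection refused" here confirms the apiserver never started
	    curl -k https://localhost:8443/livez

	One final pass of the same cycle is reproduced verbatim below.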
	I1222 01:51:52.197481 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:52.208164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:52.208283 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:52.240218 2149446 cri.go:96] found id: ""
	I1222 01:51:52.240244 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.240252 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:52.240258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:52.240325 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:52.267476 2149446 cri.go:96] found id: ""
	I1222 01:51:52.267503 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.267512 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:52.267518 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:52.267588 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:52.292286 2149446 cri.go:96] found id: ""
	I1222 01:51:52.292310 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.292319 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:52.292332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:52.292395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:52.320140 2149446 cri.go:96] found id: ""
	I1222 01:51:52.320164 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.320173 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:52.320179 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:52.320256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:52.353854 2149446 cri.go:96] found id: ""
	I1222 01:51:52.353888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.353897 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:52.353903 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:52.354001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:52.383818 2149446 cri.go:96] found id: ""
	I1222 01:51:52.383851 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.383860 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:52.383867 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:52.383958 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:52.418591 2149446 cri.go:96] found id: ""
	I1222 01:51:52.418616 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.418624 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:52.418631 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:52.418714 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:52.449289 2149446 cri.go:96] found id: ""
	I1222 01:51:52.449383 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.449407 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:52.449430 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:52.449461 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:52.525065 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:52.525107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:52.543910 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:52.543942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:52.607708 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:52.599699    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.600218    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.601875    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.602334    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.603935    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:52.599699    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.600218    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.601875    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.602334    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.603935    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:52.607772 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:52.607790 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:52.636841 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:52.636879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
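The block above is one complete pass of minikube's apiserver wait loop: probe for a kube-apiserver process, list each expected control-plane container with crictl, then gather kubelet, dmesg, "describe nodes", CRI-O, and container-status output. A minimal shell sketch of the equivalent loop follows; the command strings are copied verbatim from the log, while the loop structure and the 3-second interval (inferred from the timestamps) are an illustrative reconstruction, not minikube source.

	# Illustrative reconstruction of the wait loop seen above. The commands are
	# verbatim from the log; the loop and sleep interval are inferred, not
	# taken from minikube's code.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    # Each listing below returned no IDs in this run ("found id: \"\"").
	    sudo crictl --timeout=10s ps -a --quiet --name="$name"
	  done
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	  sleep 3
	done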
	[... the same gather cycle repeats every ~3 s at 01:51:55, 01:51:58, 01:52:01, 01:52:04, 01:52:07, 01:52:10, and 01:52:13; only the order of the log gathers and the kubectl PIDs vary. Each pass finds no kube-apiserver process, every crictl listing returns 0 containers, and every "describe nodes" attempt fails with the identical connection-refused error against localhost:8443 ...]
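Every "describe nodes" attempt fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and since no kube-apiserver container ever starts (every crictl listing above returns 0 containers), nothing is listening on that port and the TCP connect is refused. Hypothetical spot checks, not taken from the log, that would confirm this from inside the node (e.g. via `minikube ssh`):

	# Illustrative checks (assumptions, not from the log): verify that no
	# process is bound to the apiserver port inside the minikube node.
	sudo ss -ltn 'sport = :8443'              # expect no listener on 8443
	curl -ksS https://localhost:8443/healthz  # expect "connection refused"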
	I1222 01:52:15.961525 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:15.972093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:15.972167 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:15.996879 2149446 cri.go:96] found id: ""
	I1222 01:52:15.996913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:15.996922 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:15.996929 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:15.996994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:16.024677 2149446 cri.go:96] found id: ""
	I1222 01:52:16.024700 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.024709 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:16.024715 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:16.024777 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:16.050558 2149446 cri.go:96] found id: ""
	I1222 01:52:16.050590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.050599 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:16.050606 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:16.050666 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:16.076374 2149446 cri.go:96] found id: ""
	I1222 01:52:16.076400 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.076409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:16.076416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:16.076476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:16.107298 2149446 cri.go:96] found id: ""
	I1222 01:52:16.107323 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.107331 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:16.107339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:16.107402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:16.139868 2149446 cri.go:96] found id: ""
	I1222 01:52:16.139895 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.139904 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:16.139911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:16.139972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:16.170608 2149446 cri.go:96] found id: ""
	I1222 01:52:16.170631 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.170640 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:16.170646 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:16.170708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:16.195886 2149446 cri.go:96] found id: ""
	I1222 01:52:16.195913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.195932 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
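
The block above is minikube sweeping its list of control-plane and addon container names and asking CRI-O for matches; every query returns an empty ID list, so the components are not merely crashed but have no containers at all. A minimal sketch of the same sweep, assuming only the crictl invocation already shown in the log:

	# hypothetical loop reproducing the per-component check; names taken from the log
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container matching \"$name\""
	done
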
	I1222 01:52:16.195941 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:16.195954 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:16.262211 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:16.262249 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:16.280423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:16.280452 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:16.356185 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:16.346872    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.347685    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.349384    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.350021    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.351608    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:16.356258 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:16.356286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:16.391461 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:16.391538 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
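
The "container status" gather is deliberately runtime-agnostic: it resolves crictl via which, and if that listing fails outright it falls back to docker ps -a. On this CRI-O node only the first branch ever runs:

	# the fallback chain as logged; the backticks substitute crictl's resolved path
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
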
	I1222 01:52:18.932633 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:18.944359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:18.944431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:18.971356 2149446 cri.go:96] found id: ""
	I1222 01:52:18.971379 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.971388 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:18.971394 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:18.971457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:18.998044 2149446 cri.go:96] found id: ""
	I1222 01:52:18.998070 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.998078 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:18.998085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:18.998149 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:19.028529 2149446 cri.go:96] found id: ""
	I1222 01:52:19.028553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.028566 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:19.028573 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:19.028633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:19.052808 2149446 cri.go:96] found id: ""
	I1222 01:52:19.052833 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.052843 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:19.052850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:19.052910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:19.077981 2149446 cri.go:96] found id: ""
	I1222 01:52:19.078004 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.078012 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:19.078019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:19.078080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:19.105393 2149446 cri.go:96] found id: ""
	I1222 01:52:19.105422 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.105433 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:19.105439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:19.105498 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:19.146968 2149446 cri.go:96] found id: ""
	I1222 01:52:19.146995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.147004 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:19.147010 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:19.147075 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:19.176941 2149446 cri.go:96] found id: ""
	I1222 01:52:19.176965 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.176974 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:19.176983 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:19.176995 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:19.246864 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:19.246904 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:19.266042 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:19.266070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:19.330582 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:19.322391    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.323105    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.324610    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.325067    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.326613    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:19.330602 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:19.330615 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:19.366422 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:19.366457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:21.896072 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:21.906705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:21.906774 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:21.931888 2149446 cri.go:96] found id: ""
	I1222 01:52:21.931913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.931921 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:21.931928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:21.931988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:21.959852 2149446 cri.go:96] found id: ""
	I1222 01:52:21.959925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.959959 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:21.959984 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:21.960070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:21.984377 2149446 cri.go:96] found id: ""
	I1222 01:52:21.984402 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.984410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:21.984423 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:21.984485 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:22.015185 2149446 cri.go:96] found id: ""
	I1222 01:52:22.015212 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.015221 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:22.015227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:22.015290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:22.041402 2149446 cri.go:96] found id: ""
	I1222 01:52:22.041427 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.041436 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:22.041442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:22.041505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:22.069753 2149446 cri.go:96] found id: ""
	I1222 01:52:22.069780 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.069790 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:22.069797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:22.069863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:22.100905 2149446 cri.go:96] found id: ""
	I1222 01:52:22.100941 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.100950 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:22.100956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:22.101028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:22.133734 2149446 cri.go:96] found id: ""
	I1222 01:52:22.133770 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.133778 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:22.133787 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:22.133799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:22.219653 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:22.219690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:22.236668 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:22.236699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:22.306717 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:22.298969    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.299431    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301029    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301450    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.302886    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:22.306779 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:22.306799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:22.334960 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:22.334996 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:24.888617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:24.899592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:24.899667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:24.924136 2149446 cri.go:96] found id: ""
	I1222 01:52:24.924161 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.924171 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:24.924177 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:24.924237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:24.949757 2149446 cri.go:96] found id: ""
	I1222 01:52:24.949782 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.949791 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:24.949797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:24.949862 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:24.975441 2149446 cri.go:96] found id: ""
	I1222 01:52:24.975468 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.975476 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:24.975482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:24.975544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:24.999474 2149446 cri.go:96] found id: ""
	I1222 01:52:24.999500 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.999509 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:24.999517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:24.999624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:25.034630 2149446 cri.go:96] found id: ""
	I1222 01:52:25.034655 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.034663 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:25.034670 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:25.034737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:25.060352 2149446 cri.go:96] found id: ""
	I1222 01:52:25.060375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.060385 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:25.060392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:25.060516 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:25.088743 2149446 cri.go:96] found id: ""
	I1222 01:52:25.088768 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.088777 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:25.088784 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:25.088845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:25.116748 2149446 cri.go:96] found id: ""
	I1222 01:52:25.116781 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.116790 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:25.116800 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:25.116813 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:25.198386 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:25.198430 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:25.216584 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:25.216619 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:25.285946 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:25.278237    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.278629    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280115    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280453    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.281691    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:25.285971 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:25.285986 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:25.313622 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:25.313652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:27.852014 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:27.862934 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:27.863049 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:27.888699 2149446 cri.go:96] found id: ""
	I1222 01:52:27.888735 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.888744 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:27.888751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:27.888822 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:27.914148 2149446 cri.go:96] found id: ""
	I1222 01:52:27.914223 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.914246 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:27.914259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:27.914337 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:27.939481 2149446 cri.go:96] found id: ""
	I1222 01:52:27.939560 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.939581 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:27.939595 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:27.939668 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:27.967633 2149446 cri.go:96] found id: ""
	I1222 01:52:27.967662 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.967671 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:27.967678 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:27.967785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:27.991808 2149446 cri.go:96] found id: ""
	I1222 01:52:27.991889 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.991911 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:27.991932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:27.992037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:28.023583 2149446 cri.go:96] found id: ""
	I1222 01:52:28.023622 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.023631 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:28.023638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:28.023709 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:28.052545 2149446 cri.go:96] found id: ""
	I1222 01:52:28.052625 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.052648 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:28.052672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:28.052763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:28.078522 2149446 cri.go:96] found id: ""
	I1222 01:52:28.078550 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.078558 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:28.078567 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:28.078580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:28.147757 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:28.147792 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:28.165530 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:28.165560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:28.232073 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:28.223681    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.224341    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.225912    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.226397    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.228007    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:28.232094 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:28.232107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:28.260533 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:28.260569 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:30.796554 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:30.807742 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:30.807813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:30.835334 2149446 cri.go:96] found id: ""
	I1222 01:52:30.835361 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.835370 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:30.835377 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:30.835467 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:30.861061 2149446 cri.go:96] found id: ""
	I1222 01:52:30.861101 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.861113 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:30.861121 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:30.861195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:30.888925 2149446 cri.go:96] found id: ""
	I1222 01:52:30.888999 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.889021 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:30.889042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:30.889128 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:30.916374 2149446 cri.go:96] found id: ""
	I1222 01:52:30.916448 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.916470 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:30.916490 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:30.916583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:30.941908 2149446 cri.go:96] found id: ""
	I1222 01:52:30.941932 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.941940 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:30.941947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:30.942017 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:30.968265 2149446 cri.go:96] found id: ""
	I1222 01:52:30.968290 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.968298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:30.968305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:30.968372 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:30.995148 2149446 cri.go:96] found id: ""
	I1222 01:52:30.995174 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.995183 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:30.995189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:30.995250 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:31.022834 2149446 cri.go:96] found id: ""
	I1222 01:52:31.022861 2149446 logs.go:282] 0 containers: []
	W1222 01:52:31.022870 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:31.022879 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:31.022891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:31.083576 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:31.075792    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.076576    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.077772    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.078220    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.079697    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:31.083600 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:31.083613 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:31.120484 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:31.120518 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:31.156273 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:31.156303 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:31.226587 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:31.226623 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
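
Each gathering cycle opens with a process-level probe before any CRI queries: pgrep with -x (the pattern must match exactly), -n (newest match only), and -f (match against the full command line) looks for a kube-apiserver process whose command line mentions minikube. Its silent return here means the apiserver is not even running as a process, consistent with the empty container listings:

	# the per-cycle liveness probe from the log; prints a PID only on a match
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
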
	I1222 01:52:33.743333 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:33.753911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:33.753986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:33.779167 2149446 cri.go:96] found id: ""
	I1222 01:52:33.779192 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.779201 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:33.779207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:33.779265 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:33.804863 2149446 cri.go:96] found id: ""
	I1222 01:52:33.804892 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.804901 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:33.804913 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:33.804972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:33.830391 2149446 cri.go:96] found id: ""
	I1222 01:52:33.830414 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.830421 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:33.830428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:33.830488 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:33.856024 2149446 cri.go:96] found id: ""
	I1222 01:52:33.856047 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.856057 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:33.856064 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:33.856121 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:33.885352 2149446 cri.go:96] found id: ""
	I1222 01:52:33.885375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.885383 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:33.885389 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:33.885454 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:33.910795 2149446 cri.go:96] found id: ""
	I1222 01:52:33.910819 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.910827 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:33.910835 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:33.910922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:33.937408 2149446 cri.go:96] found id: ""
	I1222 01:52:33.937435 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.937444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:33.937451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:33.937513 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:33.964011 2149446 cri.go:96] found id: ""
	I1222 01:52:33.964037 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.964046 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:33.964055 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:33.964069 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:34.030109 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:34.021515    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.022250    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.023865    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.024450    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.026222    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:34.030135 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:34.030147 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:34.058869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:34.058905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:34.090878 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:34.090905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:34.173693 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:34.173734 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
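
The kubelet, CRI-O, and dmesg gathers are all bounded so the report stays readable: journalctl returns only the last 400 lines per unit, and dmesg is filtered to warning severity and above with pager and color disabled (a reading of the flags, assuming current util-linux semantics):

	# last 400 journal lines for a single unit
	sudo journalctl -u kubelet -n 400
	# kernel messages: human-readable (-H), no pager (-P), no color (-L=never),
	# warnings and worse only, capped at 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
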
	I1222 01:52:36.691703 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:36.703200 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:36.703287 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:36.729524 2149446 cri.go:96] found id: ""
	I1222 01:52:36.729547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.729556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:36.729562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:36.729620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:36.755426 2149446 cri.go:96] found id: ""
	I1222 01:52:36.755451 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.755459 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:36.755465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:36.755522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:36.780359 2149446 cri.go:96] found id: ""
	I1222 01:52:36.780384 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.780393 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:36.780399 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:36.780459 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:36.810342 2149446 cri.go:96] found id: ""
	I1222 01:52:36.810370 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.810380 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:36.810386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:36.810447 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:36.836550 2149446 cri.go:96] found id: ""
	I1222 01:52:36.836586 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.836597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:36.836619 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:36.836702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:36.863450 2149446 cri.go:96] found id: ""
	I1222 01:52:36.863476 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.863485 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:36.863492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:36.863554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:36.890913 2149446 cri.go:96] found id: ""
	I1222 01:52:36.890938 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.890947 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:36.890953 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:36.891014 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:36.921826 2149446 cri.go:96] found id: ""
	I1222 01:52:36.921849 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.921858 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:36.921867 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:36.921879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:36.938824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:36.938854 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:37.002681 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:36.993721    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.994257    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.995809    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.996244    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.997718    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
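Every describe-nodes attempt in this section fails the same way: a plain TCP connection refused on [::1]:8443, which means nothing is listening on the apiserver port at all, rather than a certificate or authorization problem higher in the stack. A small sketch that makes that distinction explicit (host and port are taken from the log; the two-second timeout is an arbitrary choice):

// Sketch: distinguish "apiserver not listening" from TLS/auth failures.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log's "connect: connection refused".
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443; a failure would be higher in the stack")
}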
	I1222 01:52:37.002770 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:37.002797 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:37.038076 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:37.038111 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:37.065632 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:37.065659 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
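With no containers to inspect, the fallback evidence comes from the host itself: the kubelet and CRI-O systemd journals, filtered kernel messages, and a container listing that degrades from crictl to docker. A rough sketch bundling those collectors, with the command lines copied verbatim from the log and error handling deliberately trimmed:

// Sketch of the host-level log gathering seen above.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmdline string) string {
	out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out)
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		fmt.Printf("== journalctl -u %s ==\n%s\n", unit, run("sudo journalctl -u "+unit+" -n 400"))
	}
	fmt.Println("== dmesg (warnings and above) ==")
	fmt.Println(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
	fmt.Println("== container status, with docker fallback ==")
	fmt.Println(run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
}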
	I1222 01:52:39.635717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:39.650457 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:39.650526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:39.678446 2149446 cri.go:96] found id: ""
	I1222 01:52:39.678470 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.678479 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:39.678485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:39.678543 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:39.703602 2149446 cri.go:96] found id: ""
	I1222 01:52:39.703627 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.703636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:39.703643 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:39.703702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:39.728380 2149446 cri.go:96] found id: ""
	I1222 01:52:39.728406 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.728415 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:39.728421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:39.728484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:39.753237 2149446 cri.go:96] found id: ""
	I1222 01:52:39.753261 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.753269 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:39.753275 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:39.753373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:39.778213 2149446 cri.go:96] found id: ""
	I1222 01:52:39.778244 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.778253 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:39.778259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:39.778319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:39.808901 2149446 cri.go:96] found id: ""
	I1222 01:52:39.808925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.808933 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:39.808940 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:39.808997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:39.834452 2149446 cri.go:96] found id: ""
	I1222 01:52:39.834478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.834488 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:39.834495 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:39.834556 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:39.859475 2149446 cri.go:96] found id: ""
	I1222 01:52:39.859498 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.859507 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:39.859515 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:39.859526 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:39.893788 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:39.893816 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:39.958922 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:39.958959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:39.975346 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:39.975380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:40.062822 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:40.052520    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.053798    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.054613    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.056904    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.057401    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:40.062851 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:40.062864 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:42.596761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:42.611989 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:42.612054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:42.650231 2149446 cri.go:96] found id: ""
	I1222 01:52:42.650253 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.650261 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:42.650268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:42.650326 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:42.677520 2149446 cri.go:96] found id: ""
	I1222 01:52:42.677547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.677556 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:42.677562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:42.677620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:42.702450 2149446 cri.go:96] found id: ""
	I1222 01:52:42.702478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.702487 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:42.702493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:42.702549 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:42.727875 2149446 cri.go:96] found id: ""
	I1222 01:52:42.727947 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.727975 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:42.727994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:42.728107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:42.753077 2149446 cri.go:96] found id: ""
	I1222 01:52:42.753159 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.753187 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:42.753207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:42.753360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:42.779648 2149446 cri.go:96] found id: ""
	I1222 01:52:42.779671 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.779679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:42.779686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:42.779761 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:42.808099 2149446 cri.go:96] found id: ""
	I1222 01:52:42.808172 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.808181 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:42.808188 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:42.808247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:42.834339 2149446 cri.go:96] found id: ""
	I1222 01:52:42.834367 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.834376 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:42.834385 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:42.834399 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:42.899680 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:42.899717 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:42.916710 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:42.916749 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:42.983675 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:42.975196    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.975990    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.977681    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.978270    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.979868    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:42.983698 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:42.983710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:43.011581 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:43.011616 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
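By this point the shape of the section is clear: the same probe cycle repeats roughly every three seconds (01:52:36, 01:52:39, 01:52:42, and so on) while minikube waits for a kube-apiserver process to appear. A sketch of that outer retry loop, using the same pgrep pattern as the log; the three-second interval is read off the timestamps and the two-minute deadline is an assumption for illustration, not minikube's actual constant:

// Sketch of the retry loop driving this section of the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // cadence matches the ~3s gaps between probe cycles
	}
	fmt.Println("timed out waiting for kube-apiserver")
}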
	I1222 01:52:45.543456 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:45.553928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:45.554038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:45.578837 2149446 cri.go:96] found id: ""
	I1222 01:52:45.578860 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.578868 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:45.578874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:45.578932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:45.612293 2149446 cri.go:96] found id: ""
	I1222 01:52:45.612315 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.612323 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:45.612340 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:45.612407 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:45.648009 2149446 cri.go:96] found id: ""
	I1222 01:52:45.648043 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.648052 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:45.648059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:45.648157 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:45.675586 2149446 cri.go:96] found id: ""
	I1222 01:52:45.675609 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.675618 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:45.675624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:45.675686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:45.702507 2149446 cri.go:96] found id: ""
	I1222 01:52:45.702576 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.702597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:45.702610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:45.702686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:45.729853 2149446 cri.go:96] found id: ""
	I1222 01:52:45.729921 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.729943 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:45.729964 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:45.730047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:45.760311 2149446 cri.go:96] found id: ""
	I1222 01:52:45.760387 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.760420 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:45.760439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:45.760522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:45.785968 2149446 cri.go:96] found id: ""
	I1222 01:52:45.785995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.786005 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:45.786014 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:45.786027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:45.854201 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:45.854240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:45.871755 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:45.871787 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:45.937315 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:45.929306    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.929882    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.931547    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.932074    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.933637    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:45.937355 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:45.937368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:45.966159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:45.966197 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.499646 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:48.510238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:48.510330 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:48.536889 2149446 cri.go:96] found id: ""
	I1222 01:52:48.536923 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.536932 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:48.536939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:48.537006 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:48.563218 2149446 cri.go:96] found id: ""
	I1222 01:52:48.563284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.563305 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:48.563317 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:48.563393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:48.588527 2149446 cri.go:96] found id: ""
	I1222 01:52:48.588561 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.588570 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:48.588576 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:48.588643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:48.621974 2149446 cri.go:96] found id: ""
	I1222 01:52:48.622045 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.622080 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:48.622106 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:48.622197 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:48.654895 2149446 cri.go:96] found id: ""
	I1222 01:52:48.654970 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.654992 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:48.655014 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:48.655102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:48.692809 2149446 cri.go:96] found id: ""
	I1222 01:52:48.692885 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.692911 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:48.692931 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:48.693056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:48.717194 2149446 cri.go:96] found id: ""
	I1222 01:52:48.717272 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.717294 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:48.717314 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:48.717425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:48.742591 2149446 cri.go:96] found id: ""
	I1222 01:52:48.742616 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.742625 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:48.742633 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:48.742663 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:48.801798 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:48.794377    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.794846    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796207    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796539    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.798000    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:48.801831 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:48.801843 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:48.828911 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:48.828944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.860768 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:48.860798 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:48.925684 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:48.925719 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.443762 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:51.454909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:51.454993 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:51.484486 2149446 cri.go:96] found id: ""
	I1222 01:52:51.484511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.484520 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:51.484526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:51.484590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:51.510790 2149446 cri.go:96] found id: ""
	I1222 01:52:51.510814 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.510822 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:51.510828 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:51.510886 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:51.537007 2149446 cri.go:96] found id: ""
	I1222 01:52:51.537030 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.537039 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:51.537050 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:51.537110 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:51.563454 2149446 cri.go:96] found id: ""
	I1222 01:52:51.563479 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.563487 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:51.563494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:51.563551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:51.590407 2149446 cri.go:96] found id: ""
	I1222 01:52:51.590433 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.590442 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:51.590448 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:51.590512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:51.626498 2149446 cri.go:96] found id: ""
	I1222 01:52:51.626525 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.626533 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:51.626540 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:51.626604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:51.653575 2149446 cri.go:96] found id: ""
	I1222 01:52:51.653601 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.653610 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:51.653616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:51.653676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:51.686395 2149446 cri.go:96] found id: ""
	I1222 01:52:51.686425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.686435 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:51.686444 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:51.686457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:51.756701 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:51.756737 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.773423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:51.773525 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:51.837429 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:51.829362    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.830109    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.831677    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.832019    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.833581    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:51.837453 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:51.837467 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:51.870792 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:51.870830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.401483 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:54.411791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:54.411865 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:54.439704 2149446 cri.go:96] found id: ""
	I1222 01:52:54.439729 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.439737 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:54.439743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:54.439811 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:54.464997 2149446 cri.go:96] found id: ""
	I1222 01:52:54.465020 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.465029 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:54.465047 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:54.465112 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:54.489955 2149446 cri.go:96] found id: ""
	I1222 01:52:54.489979 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.489988 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:54.489994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:54.490051 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:54.520365 2149446 cri.go:96] found id: ""
	I1222 01:52:54.520390 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.520400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:54.520406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:54.520463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:54.546850 2149446 cri.go:96] found id: ""
	I1222 01:52:54.546875 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.546883 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:54.546890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:54.546971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:54.572315 2149446 cri.go:96] found id: ""
	I1222 01:52:54.572353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.572363 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:54.572385 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:54.572466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:54.606370 2149446 cri.go:96] found id: ""
	I1222 01:52:54.606405 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.606414 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:54.606421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:54.606494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:54.640572 2149446 cri.go:96] found id: ""
	I1222 01:52:54.640598 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.640607 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:54.640616 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:54.640628 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:54.708604 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:54.700120    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.700711    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.702391    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.703169    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.704748    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:54.708624 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:54.708637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:54.737164 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:54.737199 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.768207 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:54.768291 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:54.833444 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:54.833481 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:57.350685 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:57.362041 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:57.362120 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:57.395509 2149446 cri.go:96] found id: ""
	I1222 01:52:57.395533 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.395542 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:57.395548 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:57.395606 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:57.422756 2149446 cri.go:96] found id: ""
	I1222 01:52:57.422778 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.422787 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:57.422794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:57.422856 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:57.449198 2149446 cri.go:96] found id: ""
	I1222 01:52:57.449274 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.449297 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:57.449332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:57.449410 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:57.475089 2149446 cri.go:96] found id: ""
	I1222 01:52:57.475116 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.475136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:57.475142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:57.475239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:57.500721 2149446 cri.go:96] found id: ""
	I1222 01:52:57.500747 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.500755 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:57.500763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:57.500823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:57.526256 2149446 cri.go:96] found id: ""
	I1222 01:52:57.526283 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.526292 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:57.526299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:57.526361 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:57.551020 2149446 cri.go:96] found id: ""
	I1222 01:52:57.551098 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.551122 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:57.551135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:57.551251 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:57.578237 2149446 cri.go:96] found id: ""
	I1222 01:52:57.578264 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.578272 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:57.578281 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:57.578293 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:57.647976 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:57.648013 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:57.669609 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:57.669637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:57.736734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:57.727886    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.728902    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730342    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730664    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.732062    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:57.736758 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:57.736777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:57.765104 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:57.765143 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:00.296700 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:00.314134 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:00.314362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:00.360664 2149446 cri.go:96] found id: ""
	I1222 01:53:00.360694 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.360703 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:00.360709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:00.360787 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:00.400391 2149446 cri.go:96] found id: ""
	I1222 01:53:00.400434 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.400447 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:00.400458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:00.400534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:00.437859 2149446 cri.go:96] found id: ""
	I1222 01:53:00.437883 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.437892 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:00.437899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:00.437960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:00.468446 2149446 cri.go:96] found id: ""
	I1222 01:53:00.468514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.468536 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:00.468555 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:00.468641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:00.494431 2149446 cri.go:96] found id: ""
	I1222 01:53:00.494462 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.494471 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:00.494486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:00.494564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:00.520633 2149446 cri.go:96] found id: ""
	I1222 01:53:00.520660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.520669 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:00.520675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:00.520735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:00.552618 2149446 cri.go:96] found id: ""
	I1222 01:53:00.552646 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.552655 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:00.552661 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:00.552721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:00.578469 2149446 cri.go:96] found id: ""
	I1222 01:53:00.578492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.578500 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:00.578509 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:00.578523 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:00.650905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:00.650939 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:00.669192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:00.669220 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:00.732631 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:00.724784    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.725580    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727143    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727454    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.728888    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:00.724784    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.725580    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727143    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727454    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.728888    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
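	(Editor's note: every "describe nodes" attempt in this run fails identically. kubectl cannot reach the API server on localhost:8443 because, per the probes above, no kube-apiserver container exists, so the TCP connect is refused at the socket level rather than failing on auth or TLS. A sketch of reproducing the same condition by hand, using the binary and kubeconfig paths from the log:

	    # Any apiserver-backed kubectl call fails the same way while
	    # no kube-apiserver is running on the node.
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	    # Expected while the apiserver is down:
	    #   The connection to the server localhost:8443 was refused - did you
	    #   specify the right host or port?
	)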
	I1222 01:53:00.732650 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:00.732662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:00.761586 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:00.761617 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.289824 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:03.300567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:03.300641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:03.326667 2149446 cri.go:96] found id: ""
	I1222 01:53:03.326690 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.326698 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:03.326705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:03.326763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:03.363614 2149446 cri.go:96] found id: ""
	I1222 01:53:03.363637 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.363652 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:03.363662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:03.363743 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:03.391010 2149446 cri.go:96] found id: ""
	I1222 01:53:03.391033 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.391047 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:03.391054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:03.391126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:03.419391 2149446 cri.go:96] found id: ""
	I1222 01:53:03.419418 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.419426 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:03.419433 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:03.419494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:03.445436 2149446 cri.go:96] found id: ""
	I1222 01:53:03.445461 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.445470 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:03.445476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:03.445534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:03.475787 2149446 cri.go:96] found id: ""
	I1222 01:53:03.475812 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.475821 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:03.475827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:03.475894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:03.502216 2149446 cri.go:96] found id: ""
	I1222 01:53:03.502247 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.502256 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:03.502262 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:03.502323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:03.532273 2149446 cri.go:96] found id: ""
	I1222 01:53:03.532294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.532303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:03.532312 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:03.532323 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:03.548937 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:03.548964 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:03.633734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:03.626155    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.626558    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628013    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628303    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.629725    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:03.626155    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.626558    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628013    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628303    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.629725    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:03.633808 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:03.633837 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:03.665918 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:03.665950 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.698908 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:03.698933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:06.272029 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:06.282999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:06.283072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:06.309858 2149446 cri.go:96] found id: ""
	I1222 01:53:06.309882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.309890 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:06.309896 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:06.309954 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:06.334809 2149446 cri.go:96] found id: ""
	I1222 01:53:06.334837 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.334845 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:06.334852 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:06.334922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:06.366780 2149446 cri.go:96] found id: ""
	I1222 01:53:06.366807 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.366816 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:06.366823 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:06.366884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:06.396941 2149446 cri.go:96] found id: ""
	I1222 01:53:06.397018 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.397040 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:06.397057 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:06.397162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:06.424268 2149446 cri.go:96] found id: ""
	I1222 01:53:06.424294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.424302 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:06.424309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:06.424384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:06.451699 2149446 cri.go:96] found id: ""
	I1222 01:53:06.451723 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.451732 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:06.451739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:06.451799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:06.480588 2149446 cri.go:96] found id: ""
	I1222 01:53:06.480610 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.480618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:06.480624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:06.480685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:06.508393 2149446 cri.go:96] found id: ""
	I1222 01:53:06.508419 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.508428 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:06.508437 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:06.508449 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:06.525070 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:06.525101 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:06.590361 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:06.581622    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.582460    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.583922    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.584374    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.586045    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:06.581622    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.582460    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.583922    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.584374    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.586045    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:06.590383 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:06.590398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:06.624081 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:06.624115 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:06.663556 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:06.663585 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
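	(Editor's note: besides the container probes, each cycle bundles the same five host-level log sources; only the order in which they are gathered rotates between cycles. Collected together, the exact commands from the log are:

	    # Gather the same five log sources minikube collects each cycle.
	    sudo journalctl -u kubelet -n 400                                         # kubelet
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # dmesg
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig                               # describe nodes (fails in this run)
	    sudo journalctl -u crio -n 400                                            # CRI-O
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status
	)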
	I1222 01:53:09.229815 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:09.240655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:09.240737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:09.265982 2149446 cri.go:96] found id: ""
	I1222 01:53:09.266004 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.266013 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:09.266019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:09.266078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:09.291816 2149446 cri.go:96] found id: ""
	I1222 01:53:09.291841 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.291849 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:09.291855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:09.291914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:09.321089 2149446 cri.go:96] found id: ""
	I1222 01:53:09.321114 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.321123 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:09.321129 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:09.321187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:09.351795 2149446 cri.go:96] found id: ""
	I1222 01:53:09.351824 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.351833 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:09.351839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:09.351897 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:09.386800 2149446 cri.go:96] found id: ""
	I1222 01:53:09.386828 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.386837 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:09.386844 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:09.386902 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:09.416549 2149446 cri.go:96] found id: ""
	I1222 01:53:09.416574 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.416582 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:09.416589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:09.416648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:09.442626 2149446 cri.go:96] found id: ""
	I1222 01:53:09.442649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.442662 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:09.442671 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:09.442770 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:09.468113 2149446 cri.go:96] found id: ""
	I1222 01:53:09.468187 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.468209 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:09.468230 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:09.468270 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:09.531958 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:09.523337    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.523854    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.525572    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.526159    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.527780    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:09.523337    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.523854    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.525572    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.526159    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.527780    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:09.531982 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:09.531994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:09.559710 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:09.559743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:09.601932 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:09.602030 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:09.676284 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:09.676317 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:12.195759 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:12.206480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:12.206553 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:12.235418 2149446 cri.go:96] found id: ""
	I1222 01:53:12.235491 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.235513 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:12.235527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:12.235611 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:12.261441 2149446 cri.go:96] found id: ""
	I1222 01:53:12.261467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.261476 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:12.261482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:12.261551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:12.291019 2149446 cri.go:96] found id: ""
	I1222 01:53:12.291044 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.291053 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:12.291059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:12.291117 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:12.317108 2149446 cri.go:96] found id: ""
	I1222 01:53:12.317132 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.317152 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:12.317160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:12.317222 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:12.345136 2149446 cri.go:96] found id: ""
	I1222 01:53:12.345164 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.345173 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:12.345180 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:12.345239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:12.372959 2149446 cri.go:96] found id: ""
	I1222 01:53:12.372991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.373000 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:12.373006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:12.373076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:12.413013 2149446 cri.go:96] found id: ""
	I1222 01:53:12.413049 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.413058 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:12.413065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:12.413134 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:12.439197 2149446 cri.go:96] found id: ""
	I1222 01:53:12.439225 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.439235 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:12.439245 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:12.439256 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:12.469256 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:12.469285 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:12.534711 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:12.534748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:12.551566 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:12.551637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:12.624758 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:12.616140    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.617087    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.618606    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.619192    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.620695    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:12.616140    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.617087    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.618606    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.619192    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.620695    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:12.624828 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:12.624858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.157013 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:15.168255 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:15.168328 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:15.195462 2149446 cri.go:96] found id: ""
	I1222 01:53:15.195486 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.195494 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:15.195501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:15.195560 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:15.227170 2149446 cri.go:96] found id: ""
	I1222 01:53:15.227198 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.227208 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:15.227215 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:15.227282 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:15.254951 2149446 cri.go:96] found id: ""
	I1222 01:53:15.254975 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.254983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:15.254990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:15.255073 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:15.280554 2149446 cri.go:96] found id: ""
	I1222 01:53:15.280577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.280586 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:15.280592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:15.280677 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:15.306166 2149446 cri.go:96] found id: ""
	I1222 01:53:15.306189 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.306198 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:15.306203 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:15.306261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:15.331390 2149446 cri.go:96] found id: ""
	I1222 01:53:15.331414 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.331422 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:15.331429 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:15.331487 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:15.372404 2149446 cri.go:96] found id: ""
	I1222 01:53:15.372433 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.372442 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:15.372449 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:15.372512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:15.404854 2149446 cri.go:96] found id: ""
	I1222 01:53:15.404894 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.404902 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:15.404918 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:15.404930 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:15.469796 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:15.469833 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:15.487192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:15.487222 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:15.550547 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:15.541963    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.542670    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.544415    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.545061    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.546601    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:15.541963    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.542670    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.544415    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.545061    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.546601    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:15.550568 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:15.550582 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.581523 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:15.581556 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
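	(Editor's note: the container-status one-liner above uses a double fallback: if `which crictl` finds nothing it still tries the bare crictl name, and if that whole command fails it falls back to `docker ps -a`. The same logic written out long-form, a sketch equivalent to the one-liner in the log:

	    # Long-form equivalent of:
	    #   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    runtime_cli=$(which crictl || echo crictl)
	    if ! sudo "$runtime_cli" ps -a; then
	      sudo docker ps -a
	    fi
	)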
	I1222 01:53:18.130208 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:18.141246 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:18.141343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:18.168196 2149446 cri.go:96] found id: ""
	I1222 01:53:18.168219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.168227 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:18.168234 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:18.168294 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:18.194001 2149446 cri.go:96] found id: ""
	I1222 01:53:18.194025 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.194034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:18.194040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:18.194101 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:18.220324 2149446 cri.go:96] found id: ""
	I1222 01:53:18.220354 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.220362 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:18.220369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:18.220427 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:18.249998 2149446 cri.go:96] found id: ""
	I1222 01:53:18.250027 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.250036 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:18.250042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:18.250102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:18.275763 2149446 cri.go:96] found id: ""
	I1222 01:53:18.275789 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.275798 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:18.275805 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:18.275863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:18.304972 2149446 cri.go:96] found id: ""
	I1222 01:53:18.305001 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.305009 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:18.305015 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:18.305074 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:18.330117 2149446 cri.go:96] found id: ""
	I1222 01:53:18.330139 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.330147 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:18.330154 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:18.330214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:18.368632 2149446 cri.go:96] found id: ""
	I1222 01:53:18.368663 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.368672 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:18.368680 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:18.368727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:18.442905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:18.442946 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:18.459582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:18.459658 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:18.528179 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:18.516076    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.516516    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.522440    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.523458    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.524496    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:18.516076    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.516516    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.522440    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.523458    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.524496    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:18.528201 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:18.528213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:18.555680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:18.555710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.084080 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:21.094603 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:21.094678 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:21.118899 2149446 cri.go:96] found id: ""
	I1222 01:53:21.118926 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.118936 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:21.118943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:21.119001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:21.147861 2149446 cri.go:96] found id: ""
	I1222 01:53:21.147890 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.147898 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:21.147905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:21.147968 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:21.172300 2149446 cri.go:96] found id: ""
	I1222 01:53:21.172328 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.172344 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:21.172350 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:21.172408 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:21.196518 2149446 cri.go:96] found id: ""
	I1222 01:53:21.196547 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.196555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:21.196561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:21.196621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:21.222351 2149446 cri.go:96] found id: ""
	I1222 01:53:21.222373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.222381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:21.222387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:21.222445 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:21.247014 2149446 cri.go:96] found id: ""
	I1222 01:53:21.247056 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.247065 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:21.247072 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:21.247130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:21.272507 2149446 cri.go:96] found id: ""
	I1222 01:53:21.272530 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.272538 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:21.272545 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:21.272603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:21.298865 2149446 cri.go:96] found id: ""
	I1222 01:53:21.298930 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.298955 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:21.298969 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:21.298981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:21.375967 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:21.367288   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.368097   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.369766   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.370303   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.371978   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:21.367288   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.368097   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.369766   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.370303   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.371978   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:21.375990 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:21.376003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:21.407155 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:21.407194 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.439948 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:21.439978 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:21.506644 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:21.506681 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
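	(Editor's note: the probe-and-gather cycle repeats roughly every three seconds (01:53:00, :03, :06, ... above) with identical results, consistent with a wait loop that exits only once an apiserver process appears or a deadline passes. A minimal sketch of such a loop; the pgrep pattern is taken from the log, but the 2s sleep and 300s deadline are illustrative assumptions, not values from the minikube source:

	    # Retry until the apiserver process shows up or the deadline passes.
	    deadline=$((SECONDS + 300))   # illustrative timeout, not from the source
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 2   # illustrative interval
	    done
	)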
	I1222 01:53:24.027394 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:24.038461 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:24.038535 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:24.065524 2149446 cri.go:96] found id: ""
	I1222 01:53:24.065549 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.065558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:24.065564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:24.065631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:24.096630 2149446 cri.go:96] found id: ""
	I1222 01:53:24.096660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.096670 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:24.096677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:24.096744 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:24.122428 2149446 cri.go:96] found id: ""
	I1222 01:53:24.122453 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.122462 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:24.122468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:24.122533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:24.147612 2149446 cri.go:96] found id: ""
	I1222 01:53:24.147634 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.147642 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:24.147649 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:24.147713 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:24.176841 2149446 cri.go:96] found id: ""
	I1222 01:53:24.176913 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.176933 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:24.176954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:24.177042 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:24.206047 2149446 cri.go:96] found id: ""
	I1222 01:53:24.206073 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.206082 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:24.206089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:24.206148 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:24.232264 2149446 cri.go:96] found id: ""
	I1222 01:53:24.232292 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.232301 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:24.232307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:24.232376 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:24.258596 2149446 cri.go:96] found id: ""
	I1222 01:53:24.258624 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.258633 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
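The sweep above queries one component at a time, and every `crictl ps` call returns an empty ID list: no control-plane container exists in any state. The same check can be reproduced by hand with a short loop; a sketch using the exact crictl invocation from the log, assuming crictl is installed on the node:

    # List containers (any state) for each expected control-plane component.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done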
	I1222 01:53:24.258642 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:24.258655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:24.286162 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:24.286196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:24.314664 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:24.314690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:24.384317 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:24.384405 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:24.404914 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:24.404992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:24.478461 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:24.470772   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.471164   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472646   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472964   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.474221   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
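With `describe nodes` failing, the remaining evidence lives on the node itself. The gathering steps the harness runs (CRI-O and kubelet journals, dmesg, container status) can be repeated manually; a sketch, assuming the systemd unit names shown in the log:

    sudo journalctl -u crio -n 400       # CRI-O runtime logs
    sudo journalctl -u kubelet -n 400    # kubelet logs: why no static pods started
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                    # container status, all states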
	I1222 01:53:26.978694 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:26.989065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:26.989132 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:27.017546 2149446 cri.go:96] found id: ""
	I1222 01:53:27.017577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.017586 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:27.017593 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:27.017662 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:27.047826 2149446 cri.go:96] found id: ""
	I1222 01:53:27.047855 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.047863 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:27.047871 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:27.047931 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:27.073912 2149446 cri.go:96] found id: ""
	I1222 01:53:27.073940 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.073952 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:27.073958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:27.074018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:27.098951 2149446 cri.go:96] found id: ""
	I1222 01:53:27.098974 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.098982 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:27.098988 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:27.099047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:27.124715 2149446 cri.go:96] found id: ""
	I1222 01:53:27.124739 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.124748 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:27.124754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:27.124817 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:27.150398 2149446 cri.go:96] found id: ""
	I1222 01:53:27.150420 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.150431 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:27.150437 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:27.150502 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:27.176315 2149446 cri.go:96] found id: ""
	I1222 01:53:27.176347 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.176356 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:27.176386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:27.176493 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:27.203872 2149446 cri.go:96] found id: ""
	I1222 01:53:27.203953 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.203977 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:27.204000 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:27.204034 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:27.237039 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:27.237067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:27.301398 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:27.301432 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:27.317630 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:27.317657 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:27.393864 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:27.383878   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.385765   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.387565   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.388163   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.389778   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:27.393934 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:27.393960 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:29.927573 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:29.938417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:29.938524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:29.964036 2149446 cri.go:96] found id: ""
	I1222 01:53:29.964065 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.964073 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:29.964080 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:29.964138 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:29.988604 2149446 cri.go:96] found id: ""
	I1222 01:53:29.988632 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.988641 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:29.988647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:29.988708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:30.026459 2149446 cri.go:96] found id: ""
	I1222 01:53:30.026487 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.026497 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:30.026504 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:30.026573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:30.057912 2149446 cri.go:96] found id: ""
	I1222 01:53:30.057939 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.057949 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:30.057955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:30.058030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:30.083599 2149446 cri.go:96] found id: ""
	I1222 01:53:30.083668 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.083690 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:30.083709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:30.083806 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:30.110551 2149446 cri.go:96] found id: ""
	I1222 01:53:30.110620 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.110641 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:30.110655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:30.110735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:30.137174 2149446 cri.go:96] found id: ""
	I1222 01:53:30.137203 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.137211 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:30.137218 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:30.137277 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:30.162716 2149446 cri.go:96] found id: ""
	I1222 01:53:30.162753 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.162763 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:30.162772 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:30.162791 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:30.227759 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:30.227794 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:30.245443 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:30.245482 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:30.307769 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:30.299415   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.299980   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301457   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301852   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.303320   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:30.307792 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:30.307803 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:30.335176 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:30.335210 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:32.871091 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:32.881783 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:32.881907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:32.916036 2149446 cri.go:96] found id: ""
	I1222 01:53:32.916106 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.916128 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:32.916147 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:32.916230 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:32.943595 2149446 cri.go:96] found id: ""
	I1222 01:53:32.943619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.943627 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:32.943634 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:32.943692 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:32.968836 2149446 cri.go:96] found id: ""
	I1222 01:53:32.968860 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.968868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:32.968874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:32.968956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:32.997882 2149446 cri.go:96] found id: ""
	I1222 01:53:32.997950 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.997973 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:32.997985 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:32.998068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:33.026110 2149446 cri.go:96] found id: ""
	I1222 01:53:33.026137 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.026145 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:33.026152 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:33.026229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:33.055281 2149446 cri.go:96] found id: ""
	I1222 01:53:33.055306 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.055315 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:33.055321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:33.055398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:33.086387 2149446 cri.go:96] found id: ""
	I1222 01:53:33.086465 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.086486 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:33.086499 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:33.086576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:33.112687 2149446 cri.go:96] found id: ""
	I1222 01:53:33.112730 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.112740 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:33.112749 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:33.112760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:33.181024 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:33.181067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:33.198158 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:33.198190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:33.273247 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:33.265089   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.265940   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.267575   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.268051   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.269666   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:33.273341 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:33.273378 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:33.301159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:33.301190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:35.836720 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:35.847980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:35.848054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:35.879849 2149446 cri.go:96] found id: ""
	I1222 01:53:35.879879 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.879888 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:35.879894 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:35.879956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:35.915400 2149446 cri.go:96] found id: ""
	I1222 01:53:35.915426 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.915434 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:35.915441 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:35.915511 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:35.940054 2149446 cri.go:96] found id: ""
	I1222 01:53:35.940078 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.940086 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:35.940093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:35.940153 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:35.970602 2149446 cri.go:96] found id: ""
	I1222 01:53:35.970625 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.970634 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:35.970641 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:35.970699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:36.000035 2149446 cri.go:96] found id: ""
	I1222 01:53:36.000060 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.000069 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:36.000076 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:36.000146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:36.030030 2149446 cri.go:96] found id: ""
	I1222 01:53:36.030062 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.030072 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:36.030085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:36.030160 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:36.056509 2149446 cri.go:96] found id: ""
	I1222 01:53:36.056535 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.056547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:36.056554 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:36.056615 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:36.083227 2149446 cri.go:96] found id: ""
	I1222 01:53:36.083254 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.083262 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:36.083271 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:36.083282 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:36.112082 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:36.112120 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:36.144537 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:36.144568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:36.209908 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:36.209989 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:36.228268 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:36.228349 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:36.290580 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:36.282527   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.283074   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284509   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284836   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.286252   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:38.791376 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:38.803195 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:38.803268 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:38.827957 2149446 cri.go:96] found id: ""
	I1222 01:53:38.827983 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.827991 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:38.827997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:38.828059 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:38.858193 2149446 cri.go:96] found id: ""
	I1222 01:53:38.858219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.858232 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:38.858238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:38.858300 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:38.892136 2149446 cri.go:96] found id: ""
	I1222 01:53:38.892168 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.892177 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:38.892184 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:38.892262 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:38.921915 2149446 cri.go:96] found id: ""
	I1222 01:53:38.921984 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.921999 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:38.922008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:38.922067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:38.947824 2149446 cri.go:96] found id: ""
	I1222 01:53:38.947847 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.947855 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:38.947862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:38.947921 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:38.972933 2149446 cri.go:96] found id: ""
	I1222 01:53:38.972959 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.972967 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:38.972974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:38.973039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:39.007055 2149446 cri.go:96] found id: ""
	I1222 01:53:39.007083 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.007092 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:39.007099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:39.007162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:39.033180 2149446 cri.go:96] found id: ""
	I1222 01:53:39.033210 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.033219 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:39.033228 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:39.033240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:39.098988 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:39.099022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:39.117169 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:39.117195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:39.179213 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:39.171486   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.172049   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.173620   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.174041   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.175480   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:39.179238 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:39.179250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:39.207330 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:39.207363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:41.736154 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:41.748704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:41.748779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:41.774614 2149446 cri.go:96] found id: ""
	I1222 01:53:41.774640 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.774658 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:41.774668 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:41.774726 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:41.804623 2149446 cri.go:96] found id: ""
	I1222 01:53:41.804649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.804658 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:41.804665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:41.804723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:41.833829 2149446 cri.go:96] found id: ""
	I1222 01:53:41.833856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.833865 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:41.833872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:41.833936 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:41.875465 2149446 cri.go:96] found id: ""
	I1222 01:53:41.875492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.875500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:41.875507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:41.875571 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:41.905442 2149446 cri.go:96] found id: ""
	I1222 01:53:41.905467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.905476 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:41.905482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:41.905570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:41.932195 2149446 cri.go:96] found id: ""
	I1222 01:53:41.932222 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.932231 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:41.932238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:41.932323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:41.958757 2149446 cri.go:96] found id: ""
	I1222 01:53:41.958781 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.958795 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:41.958801 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:41.958884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:41.984634 2149446 cri.go:96] found id: ""
	I1222 01:53:41.984658 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.984666 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:41.984683 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:41.984711 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:42.012813 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:42.012851 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:42.047278 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:42.047308 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:42.115243 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:42.115288 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:42.152682 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:42.152891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:42.276937 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:42.267520   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.268654   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.269846   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.270701   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.271864   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:44.778660 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:44.788968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:44.789038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:44.814474 2149446 cri.go:96] found id: ""
	I1222 01:53:44.814498 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.814507 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:44.814514 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:44.814572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:44.838957 2149446 cri.go:96] found id: ""
	I1222 01:53:44.838981 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.838990 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:44.838997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:44.839055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:44.871015 2149446 cri.go:96] found id: ""
	I1222 01:53:44.871040 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.871048 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:44.871055 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:44.871126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:44.901830 2149446 cri.go:96] found id: ""
	I1222 01:53:44.901852 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.901860 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:44.901866 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:44.901926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:44.932837 2149446 cri.go:96] found id: ""
	I1222 01:53:44.932912 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.932936 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:44.932954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:44.933034 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:44.958646 2149446 cri.go:96] found id: ""
	I1222 01:53:44.958670 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.958679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:44.958685 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:44.958745 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:44.990447 2149446 cri.go:96] found id: ""
	I1222 01:53:44.990470 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.990479 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:44.990485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:44.990551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:45.030189 2149446 cri.go:96] found id: ""
	I1222 01:53:45.030216 2149446 logs.go:282] 0 containers: []
	W1222 01:53:45.030226 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:45.030236 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:45.030250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:45.105196 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:45.105243 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:45.127602 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:45.127641 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:45.224662 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:45.212653   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.213852   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.215129   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.216095   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.218056   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
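The describe-nodes failure has the same root cause as the empty container listings: the in-node kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver running, every discovery request is refused at the TCP level. A hypothetical spot check from inside the node (not part of this run) to confirm nothing is bound to the port:

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz   # refused the same way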
	I1222 01:53:45.224690 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:45.224706 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:45.294896 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:45.294933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
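The container-status command is deliberately runtime-agnostic: the backquoted which lookup keeps the outer command well-formed even when crictl is not on PATH, and the trailing || falls back to Docker if the crictl invocation fails. Unrolled for readability (same behavior):

    sudo `which crictl || echo crictl` ps -a \
      || sudo docker ps -a   # Docker fallback when crictl is missing or errors out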
	I1222 01:53:47.831558 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
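Between gather cycles minikube runs a cheap process-level liveness probe. In pgrep terms, -f matches the pattern against the full command line, -x then requires that whole line to match the pattern exactly, and -n keeps only the newest matching PID. A hand-run sketch of the same check:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" \
      || echo "no apiserver process yet"

Here it evidently finds nothing, since the container listings that follow stay empty.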
	I1222 01:53:47.843230 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:47.843313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:47.888291 2149446 cri.go:96] found id: ""
	I1222 01:53:47.888314 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.888323 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:47.888329 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:47.888398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:47.921941 2149446 cri.go:96] found id: ""
	I1222 01:53:47.921965 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.921973 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:47.921979 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:47.922038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:47.948316 2149446 cri.go:96] found id: ""
	I1222 01:53:47.948349 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.948358 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:47.948373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:47.948449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:47.975804 2149446 cri.go:96] found id: ""
	I1222 01:53:47.975882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.975898 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:47.975905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:47.975969 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:48.017770 2149446 cri.go:96] found id: ""
	I1222 01:53:48.017798 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.017807 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:48.017814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:48.017876 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:48.045819 2149446 cri.go:96] found id: ""
	I1222 01:53:48.045844 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.045853 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:48.045862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:48.045924 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:48.072420 2149446 cri.go:96] found id: ""
	I1222 01:53:48.072446 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.072454 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:48.072467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:48.072530 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:48.098359 2149446 cri.go:96] found id: ""
	I1222 01:53:48.098384 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.098392 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:48.098401 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:48.098443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:48.115484 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:48.115515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:48.182899 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:48.174594   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.175246   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.176834   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.177351   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.178997   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:48.182970 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:48.182991 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:48.211848 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:48.211883 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:48.240534 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:48.240647 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:50.807734 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:50.819902 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:50.819972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:50.856953 2149446 cri.go:96] found id: ""
	I1222 01:53:50.856980 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.856989 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:50.856995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:50.857057 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:50.893636 2149446 cri.go:96] found id: ""
	I1222 01:53:50.893665 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.893675 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:50.893681 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:50.893742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:50.923962 2149446 cri.go:96] found id: ""
	I1222 01:53:50.923991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.924000 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:50.924006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:50.924066 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:50.949904 2149446 cri.go:96] found id: ""
	I1222 01:53:50.949931 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.949940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:50.949946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:50.950007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:50.979592 2149446 cri.go:96] found id: ""
	I1222 01:53:50.979619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.979627 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:50.979633 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:50.979696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:51.006796 2149446 cri.go:96] found id: ""
	I1222 01:53:51.006823 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.006831 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:51.006838 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:51.006907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:51.036723 2149446 cri.go:96] found id: ""
	I1222 01:53:51.036750 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.036758 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:51.036765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:51.036826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:51.066606 2149446 cri.go:96] found id: ""
	I1222 01:53:51.066630 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.066639 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:51.066648 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:51.066661 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:51.132268 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:51.132306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:51.150033 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:51.150064 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:51.221006 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:51.211594   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.212356   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.214275   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215136   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215987   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:51.221029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:51.221042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:51.249405 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:51.249437 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:53.781794 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:53.792491 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:53.792562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:53.821565 2149446 cri.go:96] found id: ""
	I1222 01:53:53.821592 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.821601 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:53.821607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:53.821664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:53.858004 2149446 cri.go:96] found id: ""
	I1222 01:53:53.858026 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.858035 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:53.858042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:53.858102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:53.884816 2149446 cri.go:96] found id: ""
	I1222 01:53:53.884840 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.884848 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:53.884855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:53.884917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:53.910690 2149446 cri.go:96] found id: ""
	I1222 01:53:53.910763 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.910785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:53.910806 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:53.910896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:53.938355 2149446 cri.go:96] found id: ""
	I1222 01:53:53.938382 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.938392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:53.938398 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:53.938484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:53.968188 2149446 cri.go:96] found id: ""
	I1222 01:53:53.968220 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.968229 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:53.968236 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:53.968344 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:53.997318 2149446 cri.go:96] found id: ""
	I1222 01:53:53.997373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.997382 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:53.997388 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:53.997462 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:54.025412 2149446 cri.go:96] found id: ""
	I1222 01:53:54.025514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:54.025538 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:54.025561 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:54.025608 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:54.093935 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:54.085512   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.086255   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.087980   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.088420   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.090003   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:54.093957 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:54.093969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:54.123758 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:54.123795 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:54.153879 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:54.153908 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:54.220347 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:54.220396 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:56.737648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:56.748600 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:56.748674 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:56.774130 2149446 cri.go:96] found id: ""
	I1222 01:53:56.774156 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.774164 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:56.774171 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:56.774232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:56.800427 2149446 cri.go:96] found id: ""
	I1222 01:53:56.800458 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.800466 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:56.800472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:56.800533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:56.825296 2149446 cri.go:96] found id: ""
	I1222 01:53:56.825341 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.825352 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:56.825359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:56.825417 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:56.859965 2149446 cri.go:96] found id: ""
	I1222 01:53:56.859992 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.860000 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:56.860007 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:56.860070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:56.893738 2149446 cri.go:96] found id: ""
	I1222 01:53:56.893764 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.893773 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:56.893779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:56.893840 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:56.922780 2149446 cri.go:96] found id: ""
	I1222 01:53:56.922805 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.922814 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:56.922820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:56.922882 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:56.951835 2149446 cri.go:96] found id: ""
	I1222 01:53:56.951865 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.951874 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:56.951881 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:56.951942 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:56.982753 2149446 cri.go:96] found id: ""
	I1222 01:53:56.982788 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.982802 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:56.982832 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:56.982859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:57.054734 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:57.054771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:57.071366 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:57.071447 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:57.139693 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:57.131885   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.132329   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134013   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134495   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.135751   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:57.139715 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:57.139728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:57.168869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:57.168907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.701462 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:59.712252 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:59.712324 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:59.737831 2149446 cri.go:96] found id: ""
	I1222 01:53:59.737856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.737865 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:59.737872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:59.737935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:59.765066 2149446 cri.go:96] found id: ""
	I1222 01:53:59.765090 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.765099 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:59.765105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:59.765165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:59.789855 2149446 cri.go:96] found id: ""
	I1222 01:53:59.789878 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.789887 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:59.789893 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:59.789965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:59.815579 2149446 cri.go:96] found id: ""
	I1222 01:53:59.815601 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.815609 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:59.815616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:59.815676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:59.840029 2149446 cri.go:96] found id: ""
	I1222 01:53:59.840051 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.840059 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:59.840066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:59.840127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:59.870757 2149446 cri.go:96] found id: ""
	I1222 01:53:59.870790 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.870804 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:59.870815 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:59.870883 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:59.899916 2149446 cri.go:96] found id: ""
	I1222 01:53:59.899941 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.899954 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:59.899962 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:59.900029 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:59.927188 2149446 cri.go:96] found id: ""
	I1222 01:53:59.927221 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.927230 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:59.927239 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:59.927250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:59.957210 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:59.957247 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.988485 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:59.988515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:00.082944 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:00.082993 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:00.133491 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:00.133527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:00.297208 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:00.279334   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.280543   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.281714   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.282703   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.290162   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
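From this point the section is the same probe-and-gather cycle repeating at roughly three-second intervals (01:53:44, :47, :50, :53, :56, :59, 01:54:02, ...), each pass producing identical empty listings and the same connection-refused describe-nodes failure. The observed pattern, sketched as a loop; the interval is inferred from the timestamps and this is not the literal implementation:

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # approximate cadence seen in the log
      # each pass: crictl listings for every component, then kubelet, dmesg,
      # describe-nodes, CRI-O, and container-status gathering
    done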
	I1222 01:54:02.797491 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:02.810137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:02.810229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:02.840273 2149446 cri.go:96] found id: ""
	I1222 01:54:02.840297 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.840305 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:02.840312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:02.840420 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:02.875715 2149446 cri.go:96] found id: ""
	I1222 01:54:02.875739 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.875747 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:02.875754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:02.875813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:02.919822 2149446 cri.go:96] found id: ""
	I1222 01:54:02.919846 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.919854 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:02.919860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:02.919919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:02.948124 2149446 cri.go:96] found id: ""
	I1222 01:54:02.948146 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.948154 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:02.948160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:02.948219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:02.974340 2149446 cri.go:96] found id: ""
	I1222 01:54:02.974363 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.974372 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:02.974378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:02.974438 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:02.999271 2149446 cri.go:96] found id: ""
	I1222 01:54:02.999296 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.999305 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:02.999312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:02.999385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:03.030970 2149446 cri.go:96] found id: ""
	I1222 01:54:03.030993 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.031002 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:03.031008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:03.031068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:03.056002 2149446 cri.go:96] found id: ""
	I1222 01:54:03.056029 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.056038 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:03.056047 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:03.056058 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:03.125375 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:03.125411 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:03.145686 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:03.145716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:03.218509 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:03.209113   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.210343   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.211031   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.212871   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.213440   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:03.218530 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:03.218542 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:03.248642 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:03.248677 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:05.780702 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:05.791068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:05.791146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:05.816364 2149446 cri.go:96] found id: ""
	I1222 01:54:05.816438 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.816460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:05.816477 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:05.816563 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:05.841188 2149446 cri.go:96] found id: ""
	I1222 01:54:05.841211 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.841220 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:05.841226 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:05.841288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:05.881850 2149446 cri.go:96] found id: ""
	I1222 01:54:05.881872 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.881881 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:05.881887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:05.881948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:05.916551 2149446 cri.go:96] found id: ""
	I1222 01:54:05.916573 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.916582 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:05.916589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:05.916651 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:05.943199 2149446 cri.go:96] found id: ""
	I1222 01:54:05.943226 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.943234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:05.943240 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:05.943305 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:05.970571 2149446 cri.go:96] found id: ""
	I1222 01:54:05.970596 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.970605 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:05.970611 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:05.970671 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:05.995460 2149446 cri.go:96] found id: ""
	I1222 01:54:05.995526 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.995547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:05.995567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:05.995644 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:06.023405 2149446 cri.go:96] found id: ""
	I1222 01:54:06.023433 2149446 logs.go:282] 0 containers: []
	W1222 01:54:06.023442 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:06.023451 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:06.023462 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:06.052389 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:06.052429 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:06.081830 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:06.081857 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:06.148270 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:06.148310 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:06.164614 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:06.164642 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:06.229843 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:06.221234   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.222106   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.223684   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.224107   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.225591   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:08.730857 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:08.741318 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:08.741413 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:08.767143 2149446 cri.go:96] found id: ""
	I1222 01:54:08.767169 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.767182 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:08.767190 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:08.767248 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:08.791625 2149446 cri.go:96] found id: ""
	I1222 01:54:08.791658 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.791667 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:08.791673 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:08.791739 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:08.816602 2149446 cri.go:96] found id: ""
	I1222 01:54:08.816636 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.816645 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:08.816652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:08.816718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:08.844185 2149446 cri.go:96] found id: ""
	I1222 01:54:08.844261 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.844283 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:08.844302 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:08.844404 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:08.883312 2149446 cri.go:96] found id: ""
	I1222 01:54:08.883384 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.883408 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:08.883428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:08.883517 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:08.911421 2149446 cri.go:96] found id: ""
	I1222 01:54:08.911457 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.911466 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:08.911473 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:08.911538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:08.942750 2149446 cri.go:96] found id: ""
	I1222 01:54:08.942775 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.942784 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:08.942791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:08.942848 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:08.967789 2149446 cri.go:96] found id: ""
	I1222 01:54:08.967813 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.967822 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:08.967831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:08.967842 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:09.033639 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:09.033675 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:09.052146 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:09.052175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:09.116330 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:09.116356 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:09.116368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:09.144973 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:09.145004 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
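The pass above is one full iteration of minikube's apiserver health check: it first probes for a kube-apiserver process with pgrep, then asks the CRI runtime for each expected control-plane container by name, and, finding none, falls through to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of the container-listing step, assuming crictl and journalctl are available on the node; the component names and flags are copied from the log lines above, everything else is illustrative:

	# Probe each expected component; an empty ID list means "not found",
	# matching the 'No container was found matching "..."' warnings above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
	sudo journalctl -u kubelet -n 400   # kubelet logs, as gathered above
	sudo journalctl -u crio -n 400      # CRI-O logs, as gathered above

Every listing in this run returns an empty ID set, which is why each pass ends in log gathering rather than a health verdict.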
	I1222 01:54:11.673475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:11.684547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:11.684626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:11.710519 2149446 cri.go:96] found id: ""
	I1222 01:54:11.710548 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.710557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:11.710564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:11.710626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:11.735548 2149446 cri.go:96] found id: ""
	I1222 01:54:11.735572 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.735582 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:11.735589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:11.735654 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:11.760929 2149446 cri.go:96] found id: ""
	I1222 01:54:11.760954 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.760963 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:11.760969 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:11.761033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:11.787093 2149446 cri.go:96] found id: ""
	I1222 01:54:11.787120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.787128 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:11.787135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:11.787193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:11.812911 2149446 cri.go:96] found id: ""
	I1222 01:54:11.812936 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.812945 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:11.812951 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:11.813015 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:11.838482 2149446 cri.go:96] found id: ""
	I1222 01:54:11.838505 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.838514 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:11.838520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:11.838583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:11.866259 2149446 cri.go:96] found id: ""
	I1222 01:54:11.866286 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.866306 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:11.866313 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:11.866384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:11.893985 2149446 cri.go:96] found id: ""
	I1222 01:54:11.894008 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.894016 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:11.894025 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:11.894037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:11.913582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:11.913611 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:11.981873 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:11.974736   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.975183   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.976706   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.977106   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.978227   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:11.974736   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.975183   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.976706   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.977106   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.978227   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:11.981895 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:11.981907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:12.010106 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:12.010148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:12.040439 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:12.040469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.613993 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:14.626494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:14.626566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:14.659356 2149446 cri.go:96] found id: ""
	I1222 01:54:14.659378 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.659386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:14.659392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:14.659453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:14.686289 2149446 cri.go:96] found id: ""
	I1222 01:54:14.686311 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.686319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:14.686326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:14.686382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:14.711394 2149446 cri.go:96] found id: ""
	I1222 01:54:14.711422 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.711432 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:14.711438 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:14.711497 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:14.739481 2149446 cri.go:96] found id: ""
	I1222 01:54:14.739551 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.739573 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:14.739587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:14.739660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:14.768068 2149446 cri.go:96] found id: ""
	I1222 01:54:14.768090 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.768098 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:14.768105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:14.768164 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:14.794022 2149446 cri.go:96] found id: ""
	I1222 01:54:14.794045 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.794054 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:14.794061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:14.794122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:14.826016 2149446 cri.go:96] found id: ""
	I1222 01:54:14.826083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.826098 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:14.826105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:14.826165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:14.862200 2149446 cri.go:96] found id: ""
	I1222 01:54:14.862227 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.862238 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:14.862247 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:14.862266 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.936812 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:14.936849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:14.955319 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:14.955350 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:15.031542 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:15.031567 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:15.031581 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:15.060447 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:15.060486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:17.589430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:17.599883 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:17.600008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:17.629312 2149446 cri.go:96] found id: ""
	I1222 01:54:17.629354 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.629363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:17.629370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:17.629430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:17.666013 2149446 cri.go:96] found id: ""
	I1222 01:54:17.666039 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.666048 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:17.666054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:17.666114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:17.690505 2149446 cri.go:96] found id: ""
	I1222 01:54:17.690528 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.690536 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:17.690543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:17.690603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:17.716599 2149446 cri.go:96] found id: ""
	I1222 01:54:17.716623 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.716631 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:17.716647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:17.716706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:17.741489 2149446 cri.go:96] found id: ""
	I1222 01:54:17.741513 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.741522 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:17.741528 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:17.741587 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:17.770157 2149446 cri.go:96] found id: ""
	I1222 01:54:17.770183 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.770192 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:17.770198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:17.770286 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:17.794804 2149446 cri.go:96] found id: ""
	I1222 01:54:17.794828 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.794843 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:17.794850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:17.794912 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:17.820386 2149446 cri.go:96] found id: ""
	I1222 01:54:17.820411 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.820421 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:17.820429 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:17.820441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:17.886892 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:17.886927 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:17.906329 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:17.906358 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:17.968971 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:17.968994 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:17.969007 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:17.997165 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:17.997196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.530538 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:20.541178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:20.541247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:20.567334 2149446 cri.go:96] found id: ""
	I1222 01:54:20.567358 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.567367 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:20.567373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:20.567433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:20.594495 2149446 cri.go:96] found id: ""
	I1222 01:54:20.594519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.594528 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:20.594534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:20.594592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:20.627810 2149446 cri.go:96] found id: ""
	I1222 01:54:20.627837 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.627846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:20.627853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:20.627913 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:20.659100 2149446 cri.go:96] found id: ""
	I1222 01:54:20.659127 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.659136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:20.659142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:20.659203 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:20.684059 2149446 cri.go:96] found id: ""
	I1222 01:54:20.684083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.684092 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:20.684099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:20.684158 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:20.708264 2149446 cri.go:96] found id: ""
	I1222 01:54:20.708289 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.708298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:20.708305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:20.708373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:20.732615 2149446 cri.go:96] found id: ""
	I1222 01:54:20.732640 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.732649 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:20.732656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:20.732718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:20.764171 2149446 cri.go:96] found id: ""
	I1222 01:54:20.764198 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.764207 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:20.764215 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:20.764228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:20.780713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:20.780742 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:20.851938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:20.851961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:20.851974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:20.893661 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:20.893699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.931494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:20.931521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
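These passes repeat on a short interval until the apiserver answers; judging from the timestamps (01:54:08, :11, :14, :17, :20, ...), the retry period is roughly three seconds. A hedged sketch of that outer wait loop, with the interval inferred from the log rather than taken from minikube source:

	# Outer retry loop; the pgrep pattern is the one the log runs each pass.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # assumed ~3s interval, inferred from the timestamps above
	done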
	I1222 01:54:23.498941 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:23.509907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:23.509983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:23.539967 2149446 cri.go:96] found id: ""
	I1222 01:54:23.539997 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.540005 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:23.540012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:23.540071 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:23.569856 2149446 cri.go:96] found id: ""
	I1222 01:54:23.569884 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.569894 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:23.569900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:23.569961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:23.604014 2149446 cri.go:96] found id: ""
	I1222 01:54:23.604082 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.604103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:23.604124 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:23.604217 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:23.632481 2149446 cri.go:96] found id: ""
	I1222 01:54:23.632510 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.632519 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:23.632525 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:23.632586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:23.663814 2149446 cri.go:96] found id: ""
	I1222 01:54:23.663838 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.663847 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:23.663853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:23.663916 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:23.693790 2149446 cri.go:96] found id: ""
	I1222 01:54:23.693815 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.693824 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:23.693831 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:23.693891 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:23.722928 2149446 cri.go:96] found id: ""
	I1222 01:54:23.722995 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.723009 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:23.723017 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:23.723077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:23.749108 2149446 cri.go:96] found id: ""
	I1222 01:54:23.749133 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.749142 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:23.749151 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:23.749162 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:23.814472 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:23.814506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:23.831927 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:23.831957 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:23.920909 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:23.912937   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.913679   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915294   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915601   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.917055   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:23.912937   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.913679   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915294   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915601   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.917055   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:23.920931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:23.920943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:23.949424 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:23.949460 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:26.476761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:26.487732 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:26.487807 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:26.517071 2149446 cri.go:96] found id: ""
	I1222 01:54:26.517147 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.517162 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:26.517170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:26.517245 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:26.542078 2149446 cri.go:96] found id: ""
	I1222 01:54:26.542101 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.542110 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:26.542116 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:26.542177 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:26.567743 2149446 cri.go:96] found id: ""
	I1222 01:54:26.567767 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.567775 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:26.567782 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:26.567843 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:26.595280 2149446 cri.go:96] found id: ""
	I1222 01:54:26.595307 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.595316 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:26.595322 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:26.595382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:26.624206 2149446 cri.go:96] found id: ""
	I1222 01:54:26.624232 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.624241 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:26.624248 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:26.624308 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:26.658482 2149446 cri.go:96] found id: ""
	I1222 01:54:26.658508 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.658516 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:26.658523 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:26.658581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:26.686129 2149446 cri.go:96] found id: ""
	I1222 01:54:26.686152 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.686161 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:26.686168 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:26.686249 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:26.711709 2149446 cri.go:96] found id: ""
	I1222 01:54:26.711736 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.711745 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:26.711754 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:26.711764 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:26.778620 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:26.778656 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:26.795417 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:26.795446 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:26.867101 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:26.857306   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.858065   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.859736   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.860499   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.862034   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:26.857306   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.858065   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.859736   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.860499   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.862034   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:26.867125 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:26.867141 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:26.902503 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:26.902544 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:29.434518 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:29.445170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:29.445239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:29.470088 2149446 cri.go:96] found id: ""
	I1222 01:54:29.470111 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.470120 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:29.470126 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:29.470190 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:29.494531 2149446 cri.go:96] found id: ""
	I1222 01:54:29.494553 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.494562 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:29.494568 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:29.494627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:29.519108 2149446 cri.go:96] found id: ""
	I1222 01:54:29.519131 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.519139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:29.519146 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:29.519212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:29.544138 2149446 cri.go:96] found id: ""
	I1222 01:54:29.544162 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.544170 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:29.544176 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:29.544237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:29.569957 2149446 cri.go:96] found id: ""
	I1222 01:54:29.569982 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.569991 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:29.569999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:29.570068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:29.601725 2149446 cri.go:96] found id: ""
	I1222 01:54:29.601748 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.601756 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:29.601763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:29.601826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:29.631880 2149446 cri.go:96] found id: ""
	I1222 01:54:29.631904 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.631912 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:29.631918 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:29.631981 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:29.664642 2149446 cri.go:96] found id: ""
	I1222 01:54:29.664669 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.664678 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:29.664686 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:29.664698 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:29.732517 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:29.732561 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:29.751836 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:29.751868 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:29.825936 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:29.817178   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.817918   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.819621   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.820393   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.822050   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:29.817178   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.817918   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.819621   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.820393   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.822050   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:29.825958 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:29.825971 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:29.860213 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:29.860248 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
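Every describe-nodes attempt fails identically: kubectl cannot reach https://localhost:8443 because no apiserver container ever started, so the dial on [::1]:8443 is refused outright. A hypothetical manual check from inside the node, assuming the standard minikube apiserver port 8443 (neither command appears in the log):

	# Confirm nothing is bound to the apiserver port, then probe it directly.
	sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'
	curl -k https://localhost:8443/healthz   # expect the same connection refused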
	I1222 01:54:32.399119 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:32.410321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:32.410395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:32.436535 2149446 cri.go:96] found id: ""
	I1222 01:54:32.436567 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.436576 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:32.436589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:32.436667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:32.462497 2149446 cri.go:96] found id: ""
	I1222 01:54:32.462522 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.462531 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:32.462538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:32.462598 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:32.488532 2149446 cri.go:96] found id: ""
	I1222 01:54:32.488607 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.488631 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:32.488651 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:32.488727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:32.513915 2149446 cri.go:96] found id: ""
	I1222 01:54:32.513939 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.513956 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:32.513980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:32.514061 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:32.541025 2149446 cri.go:96] found id: ""
	I1222 01:54:32.541050 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.541060 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:32.541067 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:32.541130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:32.568517 2149446 cri.go:96] found id: ""
	I1222 01:54:32.568544 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.568553 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:32.568561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:32.568633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:32.596417 2149446 cri.go:96] found id: ""
	I1222 01:54:32.596444 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.596453 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:32.596459 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:32.596522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:32.629445 2149446 cri.go:96] found id: ""
	I1222 01:54:32.629473 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.629482 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:32.629491 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:32.629504 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:32.700379 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:32.700415 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:32.717228 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:32.717259 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:32.783507 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:32.774480   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.775245   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777042   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.779225   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:32.783529 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:32.783541 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:32.812359 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:32.812415 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:35.344384 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:35.357066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:35.357137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:35.387791 2149446 cri.go:96] found id: ""
	I1222 01:54:35.387821 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.387832 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:35.387839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:35.387907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:35.413270 2149446 cri.go:96] found id: ""
	I1222 01:54:35.413294 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.413303 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:35.413309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:35.413385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:35.438755 2149446 cri.go:96] found id: ""
	I1222 01:54:35.438783 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.438791 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:35.438798 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:35.438859 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:35.464735 2149446 cri.go:96] found id: ""
	I1222 01:54:35.464761 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.464771 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:35.464778 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:35.464838 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:35.490615 2149446 cri.go:96] found id: ""
	I1222 01:54:35.490639 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.490647 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:35.490654 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:35.490722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:35.520602 2149446 cri.go:96] found id: ""
	I1222 01:54:35.520628 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.520637 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:35.520644 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:35.520706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:35.545887 2149446 cri.go:96] found id: ""
	I1222 01:54:35.545911 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.545919 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:35.545928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:35.545989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:35.571922 2149446 cri.go:96] found id: ""
	I1222 01:54:35.571945 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.571953 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:35.571962 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:35.571973 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:35.641377 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:35.641458 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:35.658922 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:35.659080 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:35.721523 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:35.714066   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.714428   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.715873   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.716191   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.717743   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:35.721545 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:35.721560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:35.749350 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:35.749380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:38.281427 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:38.293044 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:38.293127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:38.321491 2149446 cri.go:96] found id: ""
	I1222 01:54:38.321519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.321528 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:38.321534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:38.321593 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:38.346163 2149446 cri.go:96] found id: ""
	I1222 01:54:38.346238 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.346261 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:38.346279 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:38.346367 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:38.372025 2149446 cri.go:96] found id: ""
	I1222 01:54:38.372050 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.372058 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:38.372065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:38.372133 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:38.399739 2149446 cri.go:96] found id: ""
	I1222 01:54:38.399762 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.399770 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:38.399780 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:38.399844 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:38.424497 2149446 cri.go:96] found id: ""
	I1222 01:54:38.424565 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.424589 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:38.424608 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:38.424696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:38.449671 2149446 cri.go:96] found id: ""
	I1222 01:54:38.449699 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.449708 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:38.449714 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:38.449776 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:38.477607 2149446 cri.go:96] found id: ""
	I1222 01:54:38.477674 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.477697 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:38.477710 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:38.477785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:38.506176 2149446 cri.go:96] found id: ""
	I1222 01:54:38.506204 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.506212 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:38.506220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:38.506231 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:38.533790 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:38.533822 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:38.561674 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:38.561700 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:38.628458 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:38.628502 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:38.649997 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:38.650027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:38.712931 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:38.704109   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.704905   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.706476   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.707004   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.708610   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
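Each retry above is the same sweep: minikube asks the runtime for any container, running or exited, whose name matches a control-plane component, and every query returns an empty ID list. The sweep is easy to replay by hand on the node; this loop is illustrative only (not minikube's own code) and reuses the exact crictl invocation from the log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  printf '%s: ' "$c"
	  sudo crictl --timeout=10s ps -a --quiet --name="$c" | wc -l
	done

A working control plane shows at least one container per component; here every count is zero because the kubelet, which creates the static pods, never stays up (see the kubelet section at the end).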
	I1222 01:54:41.213505 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:41.225530 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:41.225616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:41.255991 2149446 cri.go:96] found id: ""
	I1222 01:54:41.256021 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.256029 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:41.256036 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:41.256094 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:41.282289 2149446 cri.go:96] found id: ""
	I1222 01:54:41.282314 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.282333 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:41.282339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:41.282419 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:41.308460 2149446 cri.go:96] found id: ""
	I1222 01:54:41.308488 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.308499 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:41.308505 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:41.308566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:41.333516 2149446 cri.go:96] found id: ""
	I1222 01:54:41.333547 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.333555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:41.333562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:41.333620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:41.359576 2149446 cri.go:96] found id: ""
	I1222 01:54:41.359598 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.359607 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:41.359614 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:41.359672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:41.384042 2149446 cri.go:96] found id: ""
	I1222 01:54:41.384066 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.384075 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:41.384082 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:41.384142 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:41.413519 2149446 cri.go:96] found id: ""
	I1222 01:54:41.413549 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.413559 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:41.413565 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:41.413624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:41.454139 2149446 cri.go:96] found id: ""
	I1222 01:54:41.454162 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.454171 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:41.454180 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:41.454191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:41.482322 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:41.482355 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:41.511950 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:41.511979 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:41.577845 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:41.577882 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:41.595747 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:41.595826 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:41.676481 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:41.667994   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.668497   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670197   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670551   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.672124   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:44.178129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:44.189537 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:44.189613 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:44.217535 2149446 cri.go:96] found id: ""
	I1222 01:54:44.217563 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.217572 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:44.217580 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:44.217646 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:44.243271 2149446 cri.go:96] found id: ""
	I1222 01:54:44.243299 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.243308 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:44.243315 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:44.243379 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:44.272096 2149446 cri.go:96] found id: ""
	I1222 01:54:44.272120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.272129 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:44.272135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:44.272194 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:44.298955 2149446 cri.go:96] found id: ""
	I1222 01:54:44.298983 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.298993 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:44.298999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:44.299070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:44.324703 2149446 cri.go:96] found id: ""
	I1222 01:54:44.324728 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.324737 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:44.324743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:44.324834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:44.350284 2149446 cri.go:96] found id: ""
	I1222 01:54:44.350307 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.350316 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:44.350322 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:44.350384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:44.378349 2149446 cri.go:96] found id: ""
	I1222 01:54:44.378375 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.378384 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:44.378390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:44.378453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:44.408220 2149446 cri.go:96] found id: ""
	I1222 01:54:44.408244 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.408258 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:44.408271 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:44.408283 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:44.474431 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:44.474465 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:44.491072 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:44.491099 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:44.556034 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:44.547925   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.548318   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.549960   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.550258   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.551701   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:44.556055 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:44.556068 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:44.584394 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:44.584431 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.124887 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:47.137092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:47.137169 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:47.164240 2149446 cri.go:96] found id: ""
	I1222 01:54:47.164268 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.164277 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:47.164284 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:47.164343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:47.189698 2149446 cri.go:96] found id: ""
	I1222 01:54:47.189723 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.189732 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:47.189739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:47.189799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:47.215248 2149446 cri.go:96] found id: ""
	I1222 01:54:47.215276 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.215285 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:47.215291 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:47.215350 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:47.241376 2149446 cri.go:96] found id: ""
	I1222 01:54:47.241401 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.241409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:47.241416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:47.241475 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:47.266358 2149446 cri.go:96] found id: ""
	I1222 01:54:47.266383 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.266392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:47.266400 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:47.266477 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:47.292142 2149446 cri.go:96] found id: ""
	I1222 01:54:47.292167 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.292175 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:47.292182 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:47.292260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:47.318478 2149446 cri.go:96] found id: ""
	I1222 01:54:47.318503 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.318511 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:47.318520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:47.318581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:47.343233 2149446 cri.go:96] found id: ""
	I1222 01:54:47.343313 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.343328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:47.343338 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:47.343351 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:47.360045 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:47.360074 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:47.426418 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:47.426442 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:47.426457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:47.455495 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:47.455529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.485526 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:47.485555 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:50.052484 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:50.066978 2149446 out.go:203] 
	W1222 01:54:50.069810 2149446 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:54:50.069848 2149446 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:54:50.069857 2149446 out.go:285] * Related issues:
	W1222 01:54:50.069872 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:54:50.069887 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:54:50.072764 2149446 out.go:203] 
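K8S_APISERVER_MISSING is the verdict after the probe repeated throughout this section times out: for the whole 6m0s window minikube re-runs the same process check, and the apiserver binary never appears. The probe is just the pgrep shown verbatim in the log:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

A non-zero exit only says that no matching process exists; the reason it never appears is further down, in the kubelet unit log.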
	
	
	==> CRI-O <==
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.44871456Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448722183Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448729674Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448735557Z" level=info msg="RDT not available in the host system"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448747987Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449934856Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449963803Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449983872Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450671237Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450694973Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450859498Z" level=info msg="Updated default CNI network name to "
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.451450093Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.451791835Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.45184593Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.49846563Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498673066Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498765978Z" level=info msg="Create NRI interface"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498873749Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498887845Z" level=info msg="runtime interface created"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498899816Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498906454Z" level=info msg="runtime interface starting up..."
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498912689Z" level=info msg="starting plugins..."
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498927811Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.49901347Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:48:46 newest-cni-704217 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
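CRI-O itself comes up cleanly (the unit reaches "Started crio.service"), so the runtime is not what is failing; note cgroup_manager = "cgroupfs" in the configuration dump above, consistent with a cgroup v1 host. Assuming the crio-status helper that ships with CRI-O is installed on the node (this log does not confirm it), the same configuration can be read back from the running daemon without digging through the journal:

	sudo crio-status config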
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:53.192096   13484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:53.192551   13484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:53.195615   13484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:53.196211   13484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:53.197766   13484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:54:53 up 1 day,  8:37,  0 user,  load average: 0.56, 0.69, 1.19
	Linux newest-cni-704217 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:54:50 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:54:51 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 22 01:54:51 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:51 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:51 newest-cni-704217 kubelet[13367]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:51 newest-cni-704217 kubelet[13367]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:51 newest-cni-704217 kubelet[13367]: E1222 01:54:51.427542   13367 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:54:51 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:54:51 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:52 newest-cni-704217 kubelet[13387]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:52 newest-cni-704217 kubelet[13387]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:52 newest-cni-704217 kubelet[13387]: E1222 01:54:52.157132   13387 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:52 newest-cni-704217 kubelet[13413]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:52 newest-cni-704217 kubelet[13413]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:52 newest-cni-704217 kubelet[13413]: E1222 01:54:52.903616   13413 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:54:52 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
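The kubelet tail above pinpoints the failure: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), so systemd restart-loops it (counters 485-487) and the apiserver never comes back. One way to confirm the node's cgroup mode (a sketch, assuming shell access to the profile's node; `cgroup2fs` indicates cgroup v2, `tmpfs` indicates v1):

    out/minikube-linux-arm64 ssh -p newest-cni-704217 -- stat -fc %T /sys/fs/cgroup/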
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (350.473087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-704217" apiserver is not running, skipping kubectl commands (state="Stopped")
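The Stopped status is consistent with the empty container-status table in the captured logs: with the kubelet crash-looping, no pods were ever (re)created. A sketch to double-check at the CRI level (assuming crictl is available in the node image, as the CRI-O logs suggest):

    out/minikube-linux-arm64 ssh -p newest-cni-704217 -- sudo crictl ps -a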
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
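Each WARNING below is one failed poll of that label selector; with the apiserver at 192.168.76.2:8443 refusing connections, the 9m0s wait can only expire. The manual equivalent of a single poll (a sketch; `<no-preload-profile>` is a stand-in, since the actual kubectl context name is not shown in this excerpt):

    kubectl --context <no-preload-profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard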
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 67 more times]
E1222 01:50:47.970634 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 37 more times]
E1222 01:51:25.758946 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 35 more times]
E1222 01:52:01.691892 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1222 01:52:03.285143 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above was logged 39 more times in succession]
E1222 01:52:43.835594 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above was logged 42 more times in succession]
E1222 01:53:26.328137 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above was logged 89 more times in succession]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
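The warning above is the test helper retrying the same pod-list request against an apiserver that is refusing TCP connections, so the failure is at the cluster level, not a missing dashboard pod. As a rough illustration only (this is not minikube's actual helpers_test.go code), the request being retried corresponds to a client-go list call like the sketch below, assuming a kubeconfig whose current context points at https://192.168.76.2:8443:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); the
	// harness would point this at the no-preload profile's apiserver.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same query the warning shows: namespace kubernetes-dashboard,
	// label selector k8s-app=kubernetes-dashboard. A "connection refused"
	// here means the apiserver itself is down or not yet listening.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("pods found:", len(pods.Items))
}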
E1222 01:55:47.970625 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 20 more times]
E1222 01:56:08.820718 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 16 more times]
E1222 01:56:25.759823 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 35 more times]
E1222 01:57:01.692294 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 1 more time]
E1222 01:57:03.285061 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 6 more times]
E1222 01:57:11.015627 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 25 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1222 01:57:43.835376 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeats verbatim 55 more times before the client rate limiter gives up]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 2 (397.225764ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
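[Editor's note: the wall of warnings above comes from a poll loop that repeatedly lists pods by label selector until a 9m0s deadline, and every attempt fails at the TCP layer because the apiserver is stopped. The following is a minimal, hypothetical client-go sketch of that pattern, not the actual helpers_test.go code; the kubeconfig path and the 3-second interval are assumptions.]

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the 9m0s deadline reported above; the 3s interval is an assumption.
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, listErr := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if listErr != nil {
				// With the apiserver down, this prints the same
				// "connect: connection refused" seen in the log.
				fmt.Printf("WARNING: pod list returned: %v\n", listErr)
				return false, nil // tolerate the error and keep polling
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		// After 9m0s of refused connections: "context deadline exceeded".
		fmt.Println("pod failed to start:", err)
	}
}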
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2142290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:43:30.530329674Z",
	            "FinishedAt": "2025-12-22T01:43:29.176921862Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f5325c5c7084a0f9b6fab754824a17e882a8a3c300adc09d5d8d9446a681793",
	            "SandboxKey": "/var/run/docker/netns/4f5325c5c708",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38785"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38783"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:66:ff:3e:a4:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "44bb6686738bfe98627c492c33328f969aae00e65930a103715f4085574a78d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
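[Editor's note: the inspect output above shows the guest container still running, with 8443/tcp published on 127.0.0.1:38784 and the container holding 192.168.76.2 on the no-preload-491054 network, even though the status probe reported the apiserver as Stopped. A quick way to confirm the "connection refused" diagnosis from the host is a plain TCP dial against both addresses. This sketch is a diagnostic aid with addresses copied from this report, not part of the test suite.]

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Addresses copied from the inspect output above: the host-side mapping
	// of 8443/tcp and the container's IP on the no-preload-491054 network.
	for _, addr := range []string{"127.0.0.1:38784", "192.168.76.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Expect "connect: connection refused" while the apiserver is stopped.
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}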
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 2 (427.586715ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-491054 logs -n 25: (1.051692647s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-678854 sudo crictl ps --all                                          │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;   │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo ip a s                                                   │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo ip r s                                                   │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo iptables-save                                            │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo iptables -t nat -L -n -v                                 │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo systemctl status kubelet --all --full --no-pager         │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo systemctl cat kubelet --no-pager                         │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo journalctl -xeu kubelet --all --full --no-pager          │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cat /etc/kubernetes/kubelet.conf                         │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cat /var/lib/kubelet/config.yaml                         │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo systemctl status docker --all --full --no-pager          │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	│ ssh     │ -p kindnet-678854 sudo systemctl cat docker --no-pager                          │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cat /etc/docker/daemon.json                              │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	│ ssh     │ -p kindnet-678854 sudo docker system info                                       │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	│ ssh     │ -p kindnet-678854 sudo systemctl status cri-docker --all --full --no-pager      │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	│ ssh     │ -p kindnet-678854 sudo systemctl cat cri-docker --no-pager                      │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	│ ssh     │ -p kindnet-678854 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cri-dockerd --version                                    │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo systemctl status containerd --all --full --no-pager      │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	│ ssh     │ -p kindnet-678854 sudo systemctl cat containerd --no-pager                      │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cat /lib/systemd/system/containerd.service               │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo cat /etc/containerd/config.toml                          │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │ 22 Dec 25 01:58 UTC │
	│ ssh     │ -p kindnet-678854 sudo containerd config dump                                   │ kindnet-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 01:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:56:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:56:42.838644 2174704 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:56:42.838780 2174704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:56:42.838796 2174704 out.go:374] Setting ErrFile to fd 2...
	I1222 01:56:42.838801 2174704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:56:42.839036 2174704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:56:42.839433 2174704 out.go:368] Setting JSON to false
	I1222 01:56:42.840245 2174704 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117552,"bootTime":1766251051,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:56:42.840312 2174704 start.go:143] virtualization:  
	I1222 01:56:42.844018 2174704 out.go:179] * [kindnet-678854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:56:42.848615 2174704 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:56:42.848770 2174704 notify.go:221] Checking for updates...
	I1222 01:56:42.855185 2174704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:56:42.858425 2174704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:56:42.861485 2174704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:56:42.864452 2174704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:56:42.867449 2174704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:56:42.870977 2174704 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:56:42.871115 2174704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:56:42.898856 2174704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:56:42.899003 2174704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:56:42.959115 2174704 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:56:42.949790197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:56:42.959223 2174704 docker.go:319] overlay module found
	I1222 01:56:42.962387 2174704 out.go:179] * Using the docker driver based on user configuration
	I1222 01:56:42.965449 2174704 start.go:309] selected driver: docker
	I1222 01:56:42.965467 2174704 start.go:928] validating driver "docker" against <nil>
	I1222 01:56:42.965481 2174704 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:56:42.966202 2174704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:56:43.023610 2174704 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:56:43.014574479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:56:43.023765 2174704 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:56:43.023984 2174704 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:56:43.026965 2174704 out.go:179] * Using Docker driver with root privileges
	I1222 01:56:43.029793 2174704 cni.go:84] Creating CNI manager for "kindnet"
	I1222 01:56:43.029819 2174704 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:56:43.029902 2174704 start.go:353] cluster config:
	{Name:kindnet-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:56:43.033062 2174704 out.go:179] * Starting "kindnet-678854" primary control-plane node in "kindnet-678854" cluster
	I1222 01:56:43.035862 2174704 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:56:43.038616 2174704 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:56:43.041452 2174704 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:56:43.041497 2174704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 01:56:43.041509 2174704 cache.go:65] Caching tarball of preloaded images
	I1222 01:56:43.041544 2174704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:56:43.041603 2174704 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:56:43.041614 2174704 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 01:56:43.041713 2174704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/config.json ...
	I1222 01:56:43.041735 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/config.json: {Name:mk9d96c882e0bd575d7247ff009a244dbdf885d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:43.060851 2174704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:56:43.060875 2174704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:56:43.060896 2174704 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:56:43.060930 2174704 start.go:360] acquireMachinesLock for kindnet-678854: {Name:mk8ba30dad01d7cccef465a94d2653e8bb12b031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:56:43.061043 2174704 start.go:364] duration metric: took 91.238µs to acquireMachinesLock for "kindnet-678854"
	I1222 01:56:43.061074 2174704 start.go:93] Provisioning new machine with config: &{Name:kindnet-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:56:43.061142 2174704 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:56:43.064619 2174704 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:56:43.064831 2174704 start.go:159] libmachine.API.Create for "kindnet-678854" (driver="docker")
	I1222 01:56:43.064875 2174704 client.go:173] LocalClient.Create starting
	I1222 01:56:43.064966 2174704 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 01:56:43.065014 2174704 main.go:144] libmachine: Decoding PEM data...
	I1222 01:56:43.065034 2174704 main.go:144] libmachine: Parsing certificate...
	I1222 01:56:43.065092 2174704 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 01:56:43.065114 2174704 main.go:144] libmachine: Decoding PEM data...
	I1222 01:56:43.065128 2174704 main.go:144] libmachine: Parsing certificate...
	I1222 01:56:43.065519 2174704 cli_runner.go:164] Run: docker network inspect kindnet-678854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:56:43.081501 2174704 cli_runner.go:211] docker network inspect kindnet-678854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:56:43.081585 2174704 network_create.go:284] running [docker network inspect kindnet-678854] to gather additional debugging logs...
	I1222 01:56:43.081605 2174704 cli_runner.go:164] Run: docker network inspect kindnet-678854
	W1222 01:56:43.105485 2174704 cli_runner.go:211] docker network inspect kindnet-678854 returned with exit code 1
	I1222 01:56:43.105533 2174704 network_create.go:287] error running [docker network inspect kindnet-678854]: docker network inspect kindnet-678854: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-678854 not found
	I1222 01:56:43.105552 2174704 network_create.go:289] output of [docker network inspect kindnet-678854]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-678854 not found
	
	** /stderr **
	I1222 01:56:43.105648 2174704 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:56:43.124929 2174704 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 01:56:43.125264 2174704 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 01:56:43.125635 2174704 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 01:56:43.125893 2174704 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 01:56:43.126310 2174704 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a15030}
	I1222 01:56:43.126328 2174704 network_create.go:124] attempt to create docker network kindnet-678854 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:56:43.126381 2174704 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-678854 kindnet-678854
	I1222 01:56:43.195587 2174704 network_create.go:108] docker network kindnet-678854 192.168.85.0/24 created
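Note: the scan above walks minikube's candidate private /24s (192.168.49.0, .58, .67, .76) and settles on the first free one, 192.168.85.0/24. A quick way to confirm what got created, a sketch assuming the kindnet-678854 network from this run still exists:

    docker network inspect kindnet-678854 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
    # expected here: 192.168.85.0/24 gw=192.168.85.1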
	I1222 01:56:43.195615 2174704 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-678854" container
	I1222 01:56:43.195701 2174704 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:56:43.211609 2174704 cli_runner.go:164] Run: docker volume create kindnet-678854 --label name.minikube.sigs.k8s.io=kindnet-678854 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:56:43.229509 2174704 oci.go:103] Successfully created a docker volume kindnet-678854
	I1222 01:56:43.229584 2174704 cli_runner.go:164] Run: docker run --rm --name kindnet-678854-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-678854 --entrypoint /usr/bin/test -v kindnet-678854:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:56:43.739625 2174704 oci.go:107] Successfully prepared a docker volume kindnet-678854
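The "-preload-sidecar" run above is a volume-seeding trick: mounting the fresh named volume at /var and invoking the image's /usr/bin/test -d /var/lib makes Docker populate the empty volume from the image's /var tree before the --rm container exits. The same pattern in isolation (illustrative only; demo-vol and debian:bookworm are placeholders, not from this run):

    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/test -v demo-vol:/var debian:bookworm -d /var/lib
    # demo-vol now holds a copy of the image's /var contents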
	I1222 01:56:43.739697 2174704 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:56:43.739712 2174704 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:56:43.739776 2174704 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-678854:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:56:50.312731 2174704 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-678854:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (6.572916167s)
	I1222 01:56:50.312762 2174704 kic.go:203] duration metric: took 6.573046534s to extract preloaded images to volume ...
	W1222 01:56:50.312909 2174704 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:56:50.313019 2174704 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:56:50.364709 2174704 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-678854 --name kindnet-678854 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-678854 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-678854 --network kindnet-678854 --ip 192.168.85.2 --volume kindnet-678854:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
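Each --publish above binds an ephemeral host port on 127.0.0.1; the actual mapping can be recovered afterwards with docker port (using this run's container name):

    docker port kindnet-678854 22/tcp
    # e.g. 127.0.0.1:38796, the SSH endpoint libmachine dials below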
	I1222 01:56:50.661595 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Running}}
	I1222 01:56:50.682278 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Status}}
	I1222 01:56:50.700743 2174704 cli_runner.go:164] Run: docker exec kindnet-678854 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:56:50.760041 2174704 oci.go:144] the created container "kindnet-678854" has a running status.
	I1222 01:56:50.760071 2174704 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa...
	I1222 01:56:50.857978 2174704 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:56:50.883802 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Status}}
	I1222 01:56:50.903783 2174704 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:56:50.903801 2174704 kic_runner.go:114] Args: [docker exec --privileged kindnet-678854 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:56:50.951950 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Status}}
	I1222 01:56:50.979186 2174704 machine.go:94] provisionDockerMachine start ...
	I1222 01:56:50.979505 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:51.014409 2174704 main.go:144] libmachine: Using SSH client type: native
	I1222 01:56:51.014749 2174704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38796 <nil> <nil>}
	I1222 01:56:51.014765 2174704 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:56:51.017502 2174704 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60502->127.0.0.1:38796: read: connection reset by peer
	I1222 01:56:54.153036 2174704 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-678854
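The "connection reset by peer" at 01:56:51 is benign: sshd inside the freshly started container is not yet accepting connections, and libmachine retries until the hostname probe succeeds three seconds later. The equivalent manual check, using the key path, port, and user from this run:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa \
        -p 38796 docker@127.0.0.1 hostname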
	
	I1222 01:56:54.153064 2174704 ubuntu.go:182] provisioning hostname "kindnet-678854"
	I1222 01:56:54.153134 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:54.170760 2174704 main.go:144] libmachine: Using SSH client type: native
	I1222 01:56:54.171075 2174704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38796 <nil> <nil>}
	I1222 01:56:54.171091 2174704 main.go:144] libmachine: About to run SSH command:
	sudo hostname kindnet-678854 && echo "kindnet-678854" | sudo tee /etc/hostname
	I1222 01:56:54.315262 2174704 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-678854
	
	I1222 01:56:54.315360 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:54.334091 2174704 main.go:144] libmachine: Using SSH client type: native
	I1222 01:56:54.334419 2174704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38796 <nil> <nil>}
	I1222 01:56:54.334449 2174704 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-678854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-678854/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-678854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:56:54.489544 2174704 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:56:54.489587 2174704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:56:54.489616 2174704 ubuntu.go:190] setting up certificates
	I1222 01:56:54.489632 2174704 provision.go:84] configureAuth start
	I1222 01:56:54.489707 2174704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-678854
	I1222 01:56:54.506547 2174704 provision.go:143] copyHostCerts
	I1222 01:56:54.506617 2174704 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:56:54.506626 2174704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:56:54.506713 2174704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:56:54.506813 2174704 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:56:54.506823 2174704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:56:54.506851 2174704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:56:54.506911 2174704 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:56:54.506920 2174704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:56:54.506944 2174704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:56:54.507005 2174704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.kindnet-678854 san=[127.0.0.1 192.168.85.2 kindnet-678854 localhost minikube]
	I1222 01:56:54.701862 2174704 provision.go:177] copyRemoteCerts
	I1222 01:56:54.701933 2174704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:56:54.701975 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:54.721690 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:56:54.820937 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:56:54.838215 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1222 01:56:54.856742 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:56:54.874420 2174704 provision.go:87] duration metric: took 384.756617ms to configureAuth
	I1222 01:56:54.874452 2174704 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:56:54.874646 2174704 config.go:182] Loaded profile config "kindnet-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:56:54.874755 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:54.892375 2174704 main.go:144] libmachine: Using SSH client type: native
	I1222 01:56:54.892692 2174704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38796 <nil> <nil>}
	I1222 01:56:54.892705 2174704 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:56:55.300725 2174704 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:56:55.300754 2174704 machine.go:97] duration metric: took 4.321546828s to provisionDockerMachine
	I1222 01:56:55.300764 2174704 client.go:176] duration metric: took 12.235878862s to LocalClient.Create
	I1222 01:56:55.300776 2174704 start.go:167] duration metric: took 12.235945321s to libmachine.API.Create "kindnet-678854"
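Provisioning finished by writing a CRI-O drop-in that marks the service CIDR (10.96.0.0/12) as an insecure registry and restarting crio (the SSH command at 01:56:54.89 above). To double-check the file landed, a sketch against this run's container:

    docker exec kindnet-678854 cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '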
	I1222 01:56:55.300783 2174704 start.go:293] postStartSetup for "kindnet-678854" (driver="docker")
	I1222 01:56:55.300793 2174704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:56:55.300866 2174704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:56:55.300907 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:55.318086 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:56:55.413166 2174704 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:56:55.416473 2174704 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:56:55.416508 2174704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:56:55.416520 2174704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:56:55.416571 2174704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:56:55.417239 2174704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:56:55.417372 2174704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:56:55.424424 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:56:55.441149 2174704 start.go:296] duration metric: took 140.350698ms for postStartSetup
	I1222 01:56:55.441581 2174704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-678854
	I1222 01:56:55.458064 2174704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/config.json ...
	I1222 01:56:55.458342 2174704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:56:55.458391 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:55.474992 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:56:55.567160 2174704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:56:55.571656 2174704 start.go:128] duration metric: took 12.51049793s to createHost
	I1222 01:56:55.571680 2174704 start.go:83] releasing machines lock for "kindnet-678854", held for 12.510623776s
	I1222 01:56:55.571752 2174704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-678854
	I1222 01:56:55.588455 2174704 ssh_runner.go:195] Run: cat /version.json
	I1222 01:56:55.588514 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:55.588782 2174704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:56:55.588845 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:56:55.608488 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:56:55.609088 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:56:55.832151 2174704 ssh_runner.go:195] Run: systemctl --version
	I1222 01:56:55.838705 2174704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:56:55.897968 2174704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:56:55.903532 2174704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:56:55.903630 2174704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:56:55.935935 2174704 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:56:55.935965 2174704 start.go:496] detecting cgroup driver to use...
	I1222 01:56:55.936021 2174704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:56:55.936101 2174704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:56:55.952947 2174704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:56:55.965749 2174704 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:56:55.965822 2174704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:56:55.983537 2174704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:56:56.003172 2174704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:56:56.123691 2174704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:56:56.243664 2174704 docker.go:234] disabling docker service ...
	I1222 01:56:56.243737 2174704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:56:56.269360 2174704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:56:56.283532 2174704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:56:56.410317 2174704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:56:56.550132 2174704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:56:56.564609 2174704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:56:56.578822 2174704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:56:56.578908 2174704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:56:56.588228 2174704 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:56:56.588348 2174704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:56:56.600518 2174704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:56:56.609241 2174704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:56:56.618950 2174704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:56:56.627975 2174704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:56:56.636216 2174704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:56:56.649193 2174704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
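The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and injects "net.ipv4.ip_unprivileged_port_start=0" into default_sysctls so pods can bind low ports unprivileged. A quick way to eyeball the result (sketch):

    docker exec kindnet-678854 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf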
	I1222 01:56:56.658704 2174704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:56:56.668403 2174704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:56:56.675860 2174704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:56:56.790745 2174704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:56:56.951645 2174704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:56:56.951769 2174704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:56:56.955647 2174704 start.go:564] Will wait 60s for crictl version
	I1222 01:56:56.955734 2174704 ssh_runner.go:195] Run: which crictl
	I1222 01:56:56.959245 2174704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:56:56.986561 2174704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
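The version fields above come from CRI-O over the socket configured in the /etc/crictl.yaml written at 01:56:56 (runtime-endpoint: unix:///var/run/crio/crio.sock). The same probe by hand inside the node:

    sudo /usr/local/bin/crictl version
    # reports RuntimeName cri-o, RuntimeVersion 1.34.3, RuntimeApiVersion v1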
	I1222 01:56:56.986671 2174704 ssh_runner.go:195] Run: crio --version
	I1222 01:56:57.024863 2174704 ssh_runner.go:195] Run: crio --version
	I1222 01:56:57.055507 2174704 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 01:56:57.058187 2174704 cli_runner.go:164] Run: docker network inspect kindnet-678854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:56:57.074057 2174704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:56:57.077777 2174704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
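This one-liner is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the fresh mapping, and install the temp file with cp rather than mv, since /etc/hosts is bind-mounted into the container and must be overwritten in place. The same command reshaped for readability (the tab written as \t):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts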
	I1222 01:56:57.087265 2174704 kubeadm.go:884] updating cluster {Name:kindnet-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:56:57.087393 2174704 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 01:56:57.087455 2174704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:56:57.124156 2174704 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:56:57.124179 2174704 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:56:57.124239 2174704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:56:57.151370 2174704 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:56:57.151394 2174704 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:56:57.151402 2174704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1222 01:56:57.151493 2174704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-678854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1222 01:56:57.151575 2174704 ssh_runner.go:195] Run: crio config
	I1222 01:56:57.216432 2174704 cni.go:84] Creating CNI manager for "kindnet"
	I1222 01:56:57.216465 2174704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:56:57.217447 2174704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-678854 NodeName:kindnet-678854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:56:57.217579 2174704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-678854"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
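	The rendered config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below. It can be exercised without mutating node state via kubeadm's dry-run mode, a sketch using the binary path minikube stages in this run:

    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run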
	
	I1222 01:56:57.217660 2174704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:56:57.225463 2174704 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:56:57.225606 2174704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:56:57.233375 2174704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1222 01:56:57.246121 2174704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:56:57.259173 2174704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1222 01:56:57.271921 2174704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:56:57.275608 2174704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:56:57.285148 2174704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:56:57.414761 2174704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:56:57.431142 2174704 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854 for IP: 192.168.85.2
	I1222 01:56:57.431221 2174704 certs.go:195] generating shared ca certs ...
	I1222 01:56:57.431251 2174704 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.431421 2174704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:56:57.431502 2174704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:56:57.431537 2174704 certs.go:257] generating profile certs ...
	I1222 01:56:57.431642 2174704 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.key
	I1222 01:56:57.431674 2174704 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt with IP's: []
	I1222 01:56:57.653202 2174704 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt ...
	I1222 01:56:57.653237 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: {Name:mke9a804c538ac54330bdade7bd9ca12d2bd5049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.653457 2174704 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.key ...
	I1222 01:56:57.653474 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.key: {Name:mk645ce0c92d3a293851c6afdbdec8dc1ec2219d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.653580 2174704 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.key.b3c18f7a
	I1222 01:56:57.653598 2174704 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.crt.b3c18f7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:56:57.835888 2174704 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.crt.b3c18f7a ...
	I1222 01:56:57.835920 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.crt.b3c18f7a: {Name:mk09ce7fb5916d8dc976b92d0890c5e9c4e6137b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.836105 2174704 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.key.b3c18f7a ...
	I1222 01:56:57.836121 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.key.b3c18f7a: {Name:mkdc0335a843357d297570c529871d4112e1a734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.836210 2174704 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.crt.b3c18f7a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.crt
	I1222 01:56:57.836295 2174704 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.key.b3c18f7a -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.key
	I1222 01:56:57.836370 2174704 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.key
	I1222 01:56:57.836388 2174704 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.crt with IP's: []
	I1222 01:56:57.912214 2174704 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.crt ...
	I1222 01:56:57.912245 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.crt: {Name:mk7d25eda93a14b260661b51dcb9d2b3fceccf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.912448 2174704 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.key ...
	I1222 01:56:57.912462 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.key: {Name:mkc57940d609d0f95a3b29369f0d56ddecac99b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:56:57.912651 2174704 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:56:57.912695 2174704 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:56:57.912709 2174704 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:56:57.912735 2174704 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:56:57.912763 2174704 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:56:57.912791 2174704 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:56:57.912843 2174704 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:56:57.913436 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:56:57.931703 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:56:57.950125 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:56:57.968138 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:56:57.986852 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 01:56:58.007400 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:56:58.027006 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:56:58.046017 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:56:58.064643 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:56:58.083444 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:56:58.132509 2174704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:56:58.174190 2174704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:56:58.186549 2174704 ssh_runner.go:195] Run: openssl version
	I1222 01:56:58.195239 2174704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:56:58.202534 2174704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:56:58.210502 2174704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:56:58.214796 2174704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:56:58.214890 2174704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:56:58.257539 2174704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:56:58.265105 2174704 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:56:58.272275 2174704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:56:58.279445 2174704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:56:58.286981 2174704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:56:58.290803 2174704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:56:58.290911 2174704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:56:58.332096 2174704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:56:58.339806 2174704 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
	I1222 01:56:58.347038 2174704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:56:58.354336 2174704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:56:58.361873 2174704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:56:58.365709 2174704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:56:58.365776 2174704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:56:58.406707 2174704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:56:58.414329 2174704 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
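The openssl/ln sequence above reproduces OpenSSL's c_rehash layout by hand: each CA is symlinked into /etc/ssl/certs under <subject-hash>.0 so verification by hash lookup finds it. For one certificate, with the name and hash from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # here h=b5213941, matching the b5213941.0 link created above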
	I1222 01:56:58.421813 2174704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:56:58.425380 2174704 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:56:58.425494 2174704 kubeadm.go:401] StartCluster: {Name:kindnet-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:56:58.425587 2174704 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:56:58.425653 2174704 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:56:58.450975 2174704 cri.go:96] found id: ""
	I1222 01:56:58.451099 2174704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:56:58.458778 2174704 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:56:58.466371 2174704 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:56:58.466467 2174704 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:56:58.474153 2174704 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:56:58.474176 2174704 kubeadm.go:158] found existing configuration files:
	
	I1222 01:56:58.474254 2174704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:56:58.481457 2174704 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:56:58.481570 2174704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:56:58.488946 2174704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:56:58.496519 2174704 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:56:58.496611 2174704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:56:58.503872 2174704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:56:58.511768 2174704 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:56:58.511839 2174704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:56:58.518937 2174704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:56:58.526392 2174704 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:56:58.526455 2174704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:56:58.533635 2174704 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:56:58.572797 2174704 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 01:56:58.572861 2174704 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:56:58.598941 2174704 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:56:58.599016 2174704 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:56:58.599060 2174704 kubeadm.go:319] OS: Linux
	I1222 01:56:58.599109 2174704 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:56:58.599160 2174704 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:56:58.599209 2174704 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:56:58.599260 2174704 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:56:58.599310 2174704 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:56:58.599361 2174704 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:56:58.599411 2174704 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:56:58.599462 2174704 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:56:58.599511 2174704 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:56:58.662905 2174704 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:56:58.663091 2174704 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:56:58.663232 2174704 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:56:58.673687 2174704 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:56:58.679599 2174704 out.go:252]   - Generating certificates and keys ...
	I1222 01:56:58.679691 2174704 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:56:58.679765 2174704 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:56:58.778785 2174704 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:56:59.104990 2174704 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:56:59.409972 2174704 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:56:59.942284 2174704 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:57:01.012290 2174704 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:57:01.012561 2174704 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-678854 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:57:01.624788 2174704 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:57:01.624939 2174704 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-678854 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:57:02.090584 2174704 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:57:02.407500 2174704 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:57:02.666017 2174704 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:57:02.666575 2174704 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:57:03.306239 2174704 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:57:04.009791 2174704 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:57:04.587123 2174704 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:57:05.921719 2174704 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:57:06.294866 2174704 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:57:06.295828 2174704 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:57:06.298933 2174704 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:57:06.302622 2174704 out.go:252]   - Booting up control plane ...
	I1222 01:57:06.302724 2174704 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:57:06.302802 2174704 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:57:06.304081 2174704 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:57:06.320245 2174704 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:57:06.320622 2174704 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:57:06.328424 2174704 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:57:06.328709 2174704 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:57:06.328760 2174704 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:57:06.491342 2174704 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:57:06.491466 2174704 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:57:07.992916 2174704 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501662421s
	I1222 01:57:07.996482 2174704 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 01:57:07.996581 2174704 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1222 01:57:07.996752 2174704 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 01:57:07.996846 2174704 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 01:57:13.547365 2174704 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.550277552s
	I1222 01:57:14.539945 2174704 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.543447651s
	I1222 01:57:15.499063 2174704 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.50236427s
	I1222 01:57:15.533703 2174704 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 01:57:15.549689 2174704 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 01:57:15.563027 2174704 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 01:57:15.563235 2174704 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-678854 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 01:57:15.573803 2174704 kubeadm.go:319] [bootstrap-token] Using token: lfxypv.4b7io8zu89371464
	I1222 01:57:15.576763 2174704 out.go:252]   - Configuring RBAC rules ...
	I1222 01:57:15.576897 2174704 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 01:57:15.581437 2174704 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 01:57:15.591368 2174704 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 01:57:15.595298 2174704 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 01:57:15.599005 2174704 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 01:57:15.603327 2174704 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 01:57:15.906116 2174704 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 01:57:16.354642 2174704 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 01:57:16.905849 2174704 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 01:57:16.907027 2174704 kubeadm.go:319] 
	I1222 01:57:16.907101 2174704 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 01:57:16.907111 2174704 kubeadm.go:319] 
	I1222 01:57:16.907189 2174704 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 01:57:16.907198 2174704 kubeadm.go:319] 
	I1222 01:57:16.907223 2174704 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 01:57:16.907287 2174704 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 01:57:16.907341 2174704 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 01:57:16.907348 2174704 kubeadm.go:319] 
	I1222 01:57:16.907402 2174704 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 01:57:16.907409 2174704 kubeadm.go:319] 
	I1222 01:57:16.907457 2174704 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 01:57:16.907464 2174704 kubeadm.go:319] 
	I1222 01:57:16.907516 2174704 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 01:57:16.907601 2174704 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 01:57:16.907672 2174704 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 01:57:16.907679 2174704 kubeadm.go:319] 
	I1222 01:57:16.907763 2174704 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 01:57:16.907843 2174704 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 01:57:16.907851 2174704 kubeadm.go:319] 
	I1222 01:57:16.907934 2174704 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lfxypv.4b7io8zu89371464 \
	I1222 01:57:16.908039 2174704 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 01:57:16.908065 2174704 kubeadm.go:319] 	--control-plane 
	I1222 01:57:16.908072 2174704 kubeadm.go:319] 
	I1222 01:57:16.908157 2174704 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 01:57:16.908164 2174704 kubeadm.go:319] 
	I1222 01:57:16.908246 2174704 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lfxypv.4b7io8zu89371464 \
	I1222 01:57:16.908359 2174704 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 01:57:16.912356 2174704 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 01:57:16.912581 2174704 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:57:16.912692 2174704 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:57:16.912711 2174704 cni.go:84] Creating CNI manager for "kindnet"
	I1222 01:57:16.917811 2174704 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 01:57:16.920801 2174704 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 01:57:16.925261 2174704 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 01:57:16.925290 2174704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 01:57:16.938707 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1222 01:57:17.221060 2174704 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 01:57:17.221203 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:17.221282 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-678854 minikube.k8s.io/updated_at=2025_12_22T01_57_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=kindnet-678854 minikube.k8s.io/primary=true
	I1222 01:57:17.238146 2174704 ops.go:34] apiserver oom_adj: -16
	I1222 01:57:17.361770 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:17.862681 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:18.361884 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:18.861859 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:19.361938 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:19.862567 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:20.362553 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:20.862347 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:21.361918 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:21.861841 2174704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:57:21.974398 2174704 kubeadm.go:1114] duration metric: took 4.753245119s to wait for elevateKubeSystemPrivileges
	I1222 01:57:21.974430 2174704 kubeadm.go:403] duration metric: took 23.548940303s to StartCluster
	I1222 01:57:21.974446 2174704 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:57:21.974510 2174704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:57:21.975438 2174704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:57:21.975657 2174704 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:57:21.975743 2174704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 01:57:21.975972 2174704 config.go:182] Loaded profile config "kindnet-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:57:21.976013 2174704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:57:21.976081 2174704 addons.go:70] Setting storage-provisioner=true in profile "kindnet-678854"
	I1222 01:57:21.976099 2174704 addons.go:239] Setting addon storage-provisioner=true in "kindnet-678854"
	I1222 01:57:21.976124 2174704 host.go:66] Checking if "kindnet-678854" exists ...
	I1222 01:57:21.976608 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Status}}
	I1222 01:57:21.977051 2174704 addons.go:70] Setting default-storageclass=true in profile "kindnet-678854"
	I1222 01:57:21.977076 2174704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-678854"
	I1222 01:57:21.977443 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Status}}
	I1222 01:57:21.981642 2174704 out.go:179] * Verifying Kubernetes components...
	I1222 01:57:21.991647 2174704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:57:22.019055 2174704 addons.go:239] Setting addon default-storageclass=true in "kindnet-678854"
	I1222 01:57:22.019102 2174704 host.go:66] Checking if "kindnet-678854" exists ...
	I1222 01:57:22.019520 2174704 cli_runner.go:164] Run: docker container inspect kindnet-678854 --format={{.State.Status}}
	I1222 01:57:22.020972 2174704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:57:22.025982 2174704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:57:22.026017 2174704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:57:22.026091 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:57:22.065532 2174704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:57:22.065556 2174704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:57:22.065631 2174704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-678854
	I1222 01:57:22.076253 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:57:22.103234 2174704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38796 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/kindnet-678854/id_rsa Username:docker}
	I1222 01:57:22.403625 2174704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:57:22.462735 2174704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 01:57:22.462939 2174704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:57:22.516551 2174704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:57:23.142208 2174704 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1222 01:57:23.145547 2174704 node_ready.go:35] waiting up to 15m0s for node "kindnet-678854" to be "Ready" ...
	I1222 01:57:23.182211 2174704 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1222 01:57:23.185069 2174704 addons.go:530] duration metric: took 1.209047638s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1222 01:57:23.645775 2174704 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-678854" context rescaled to 1 replicas
	W1222 01:57:25.148933 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:27.648801 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:30.148538 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:32.648333 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:34.648672 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:36.649109 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:39.148194 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:41.148242 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:43.149033 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:45.152312 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:47.648924 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:49.649206 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:52.149624 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:54.648898 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:56.649035 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:57:59.149168 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:58:01.649185 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	W1222 01:58:04.149018 2174704 node_ready.go:57] node "kindnet-678854" has "Ready":"False" status (will retry)
	I1222 01:58:05.149505 2174704 node_ready.go:49] node "kindnet-678854" is "Ready"
	I1222 01:58:05.149538 2174704 node_ready.go:38] duration metric: took 42.003923837s for node "kindnet-678854" to be "Ready" ...
	I1222 01:58:05.149553 2174704 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:58:05.149627 2174704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:58:05.162276 2174704 api_server.go:72] duration metric: took 43.186583433s to wait for apiserver process to appear ...
	I1222 01:58:05.162304 2174704 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:58:05.162324 2174704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:58:05.171909 2174704 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 01:58:05.173054 2174704 api_server.go:141] control plane version: v1.34.3
	I1222 01:58:05.173084 2174704 api_server.go:131] duration metric: took 10.772765ms to wait for apiserver health ...
	I1222 01:58:05.173094 2174704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:58:05.180433 2174704 system_pods.go:59] 8 kube-system pods found
	I1222 01:58:05.180478 2174704 system_pods.go:61] "coredns-66bc5c9577-jznpf" [60e25b66-77ba-471e-bfb8-05605459d006] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:58:05.180486 2174704 system_pods.go:61] "etcd-kindnet-678854" [5257f46f-d34a-48ed-bc90-bd441ccc582c] Running
	I1222 01:58:05.180492 2174704 system_pods.go:61] "kindnet-mtcrk" [5cb780bc-a405-4341-8bba-506d5cfef00c] Running
	I1222 01:58:05.180496 2174704 system_pods.go:61] "kube-apiserver-kindnet-678854" [cda7cf29-855b-4a90-8db4-5fec985c8d8f] Running
	I1222 01:58:05.180500 2174704 system_pods.go:61] "kube-controller-manager-kindnet-678854" [389bf375-5253-44b0-83e7-d1c9efc5d3f5] Running
	I1222 01:58:05.180504 2174704 system_pods.go:61] "kube-proxy-kdb95" [44a5b8b1-84cd-445c-95b2-308c885e3655] Running
	I1222 01:58:05.180508 2174704 system_pods.go:61] "kube-scheduler-kindnet-678854" [548f3d20-9c63-4fad-a16f-af48888318b3] Running
	I1222 01:58:05.180515 2174704 system_pods.go:61] "storage-provisioner" [8d303c7a-397f-4278-b433-a0d489468050] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:58:05.180525 2174704 system_pods.go:74] duration metric: took 7.425837ms to wait for pod list to return data ...
	I1222 01:58:05.180541 2174704 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:58:05.197382 2174704 default_sa.go:45] found service account: "default"
	I1222 01:58:05.197409 2174704 default_sa.go:55] duration metric: took 16.862362ms for default service account to be created ...
	I1222 01:58:05.197420 2174704 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:58:05.200712 2174704 system_pods.go:86] 8 kube-system pods found
	I1222 01:58:05.200747 2174704 system_pods.go:89] "coredns-66bc5c9577-jznpf" [60e25b66-77ba-471e-bfb8-05605459d006] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:58:05.200754 2174704 system_pods.go:89] "etcd-kindnet-678854" [5257f46f-d34a-48ed-bc90-bd441ccc582c] Running
	I1222 01:58:05.200760 2174704 system_pods.go:89] "kindnet-mtcrk" [5cb780bc-a405-4341-8bba-506d5cfef00c] Running
	I1222 01:58:05.200765 2174704 system_pods.go:89] "kube-apiserver-kindnet-678854" [cda7cf29-855b-4a90-8db4-5fec985c8d8f] Running
	I1222 01:58:05.200769 2174704 system_pods.go:89] "kube-controller-manager-kindnet-678854" [389bf375-5253-44b0-83e7-d1c9efc5d3f5] Running
	I1222 01:58:05.200774 2174704 system_pods.go:89] "kube-proxy-kdb95" [44a5b8b1-84cd-445c-95b2-308c885e3655] Running
	I1222 01:58:05.200778 2174704 system_pods.go:89] "kube-scheduler-kindnet-678854" [548f3d20-9c63-4fad-a16f-af48888318b3] Running
	I1222 01:58:05.200784 2174704 system_pods.go:89] "storage-provisioner" [8d303c7a-397f-4278-b433-a0d489468050] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:58:05.200823 2174704 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1222 01:58:05.460493 2174704 system_pods.go:86] 8 kube-system pods found
	I1222 01:58:05.460583 2174704 system_pods.go:89] "coredns-66bc5c9577-jznpf" [60e25b66-77ba-471e-bfb8-05605459d006] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:58:05.460603 2174704 system_pods.go:89] "etcd-kindnet-678854" [5257f46f-d34a-48ed-bc90-bd441ccc582c] Running
	I1222 01:58:05.460638 2174704 system_pods.go:89] "kindnet-mtcrk" [5cb780bc-a405-4341-8bba-506d5cfef00c] Running
	I1222 01:58:05.460661 2174704 system_pods.go:89] "kube-apiserver-kindnet-678854" [cda7cf29-855b-4a90-8db4-5fec985c8d8f] Running
	I1222 01:58:05.460680 2174704 system_pods.go:89] "kube-controller-manager-kindnet-678854" [389bf375-5253-44b0-83e7-d1c9efc5d3f5] Running
	I1222 01:58:05.460705 2174704 system_pods.go:89] "kube-proxy-kdb95" [44a5b8b1-84cd-445c-95b2-308c885e3655] Running
	I1222 01:58:05.460733 2174704 system_pods.go:89] "kube-scheduler-kindnet-678854" [548f3d20-9c63-4fad-a16f-af48888318b3] Running
	I1222 01:58:05.460759 2174704 system_pods.go:89] "storage-provisioner" [8d303c7a-397f-4278-b433-a0d489468050] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:58:05.821743 2174704 system_pods.go:86] 8 kube-system pods found
	I1222 01:58:05.821778 2174704 system_pods.go:89] "coredns-66bc5c9577-jznpf" [60e25b66-77ba-471e-bfb8-05605459d006] Running
	I1222 01:58:05.821785 2174704 system_pods.go:89] "etcd-kindnet-678854" [5257f46f-d34a-48ed-bc90-bd441ccc582c] Running
	I1222 01:58:05.821790 2174704 system_pods.go:89] "kindnet-mtcrk" [5cb780bc-a405-4341-8bba-506d5cfef00c] Running
	I1222 01:58:05.821794 2174704 system_pods.go:89] "kube-apiserver-kindnet-678854" [cda7cf29-855b-4a90-8db4-5fec985c8d8f] Running
	I1222 01:58:05.821798 2174704 system_pods.go:89] "kube-controller-manager-kindnet-678854" [389bf375-5253-44b0-83e7-d1c9efc5d3f5] Running
	I1222 01:58:05.821804 2174704 system_pods.go:89] "kube-proxy-kdb95" [44a5b8b1-84cd-445c-95b2-308c885e3655] Running
	I1222 01:58:05.821808 2174704 system_pods.go:89] "kube-scheduler-kindnet-678854" [548f3d20-9c63-4fad-a16f-af48888318b3] Running
	I1222 01:58:05.821812 2174704 system_pods.go:89] "storage-provisioner" [8d303c7a-397f-4278-b433-a0d489468050] Running
	I1222 01:58:05.821820 2174704 system_pods.go:126] duration metric: took 624.394683ms to wait for k8s-apps to be running ...
	I1222 01:58:05.821832 2174704 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:58:05.821896 2174704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:58:05.835542 2174704 system_svc.go:56] duration metric: took 13.702712ms WaitForService to wait for kubelet
	I1222 01:58:05.835574 2174704 kubeadm.go:587] duration metric: took 43.859887009s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:58:05.835593 2174704 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:58:05.838534 2174704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:58:05.838565 2174704 node_conditions.go:123] node cpu capacity is 2
	I1222 01:58:05.838579 2174704 node_conditions.go:105] duration metric: took 2.9814ms to run NodePressure ...
	I1222 01:58:05.838592 2174704 start.go:242] waiting for startup goroutines ...
	I1222 01:58:05.838600 2174704 start.go:247] waiting for cluster config update ...
	I1222 01:58:05.838611 2174704 start.go:256] writing updated cluster config ...
	I1222 01:58:05.838903 2174704 ssh_runner.go:195] Run: rm -f paused
	I1222 01:58:05.842650 2174704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:58:05.846412 2174704 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jznpf" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:05.851015 2174704 pod_ready.go:94] pod "coredns-66bc5c9577-jznpf" is "Ready"
	I1222 01:58:05.851036 2174704 pod_ready.go:86] duration metric: took 4.596342ms for pod "coredns-66bc5c9577-jznpf" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:05.853269 2174704 pod_ready.go:83] waiting for pod "etcd-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:05.857451 2174704 pod_ready.go:94] pod "etcd-kindnet-678854" is "Ready"
	I1222 01:58:05.857472 2174704 pod_ready.go:86] duration metric: took 4.181807ms for pod "etcd-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:05.859516 2174704 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:05.863971 2174704 pod_ready.go:94] pod "kube-apiserver-kindnet-678854" is "Ready"
	I1222 01:58:05.863998 2174704 pod_ready.go:86] duration metric: took 4.460043ms for pod "kube-apiserver-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:05.866425 2174704 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:06.246466 2174704 pod_ready.go:94] pod "kube-controller-manager-kindnet-678854" is "Ready"
	I1222 01:58:06.246500 2174704 pod_ready.go:86] duration metric: took 380.049289ms for pod "kube-controller-manager-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:06.446692 2174704 pod_ready.go:83] waiting for pod "kube-proxy-kdb95" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:06.846771 2174704 pod_ready.go:94] pod "kube-proxy-kdb95" is "Ready"
	I1222 01:58:06.846808 2174704 pod_ready.go:86] duration metric: took 400.021424ms for pod "kube-proxy-kdb95" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:07.047104 2174704 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:07.447754 2174704 pod_ready.go:94] pod "kube-scheduler-kindnet-678854" is "Ready"
	I1222 01:58:07.447791 2174704 pod_ready.go:86] duration metric: took 400.659542ms for pod "kube-scheduler-kindnet-678854" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:58:07.447804 2174704 pod_ready.go:40] duration metric: took 1.605124011s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:58:07.517885 2174704 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:58:07.521513 2174704 out.go:179] * Done! kubectl is now configured to use "kindnet-678854" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361841003Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361848683Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.36185418Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361860416Z" level=info msg="RDT not available in the host system"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361873446Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362571854Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362593843Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362610154Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363331199Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363355166Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363489102Z" level=info msg="Updated default CNI network name to "
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364161575Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364588492Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364649191Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426273648Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426307715Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426348477Z" level=info msg="Create NRI interface"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426482331Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426498232Z" level=info msg="runtime interface created"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426511188Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426517128Z" level=info msg="runtime interface starting up..."
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426522773Z" level=info msg="starting plugins..."
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426536213Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426597914Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:43:36 no-preload-491054 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:58:41.978577    8204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:58:41.981083    8204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:58:41.982863    8204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:58:41.983170    8204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:58:41.984641    8204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	[Dec22 01:55] overlayfs: idmapped layers are currently not supported
	[Dec22 01:57] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:58:42 up 1 day,  8:41,  0 user,  load average: 1.54, 1.16, 1.27
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:58:39 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:58:40 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 22 01:58:40 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:58:40 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:58:40 no-preload-491054 kubelet[8074]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:58:40 no-preload-491054 kubelet[8074]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:58:40 no-preload-491054 kubelet[8074]: E1222 01:58:40.186865    8074 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:58:40 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:58:40 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:58:40 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 22 01:58:40 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:58:40 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:58:40 no-preload-491054 kubelet[8096]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:58:40 no-preload-491054 kubelet[8096]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:58:40 no-preload-491054 kubelet[8096]: E1222 01:58:40.940486    8096 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:58:40 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:58:40 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:58:41 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 22 01:58:41 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:58:41 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:58:41 no-preload-491054 kubelet[8149]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:58:41 no-preload-491054 kubelet[8149]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:58:41 no-preload-491054 kubelet[8149]: E1222 01:58:41.757149    8149 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:58:41 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:58:41 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
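Each kubelet restart in the dump above ends with "kubelet is configured to not run on a host using cgroup v1", so the failure sits in the host's cgroup hierarchy rather than in the test itself (the CGROUPS_* preflight lines and kubeadm's "cgroups v1 support is in maintenance mode" warning earlier in the log point the same way). A minimal check, not part of the captured log and assuming shell access to the node or host:

  # "cgroup2fs" means a unified cgroup v2 hierarchy; "tmpfs" means legacy
  # cgroup v1, which this kubelet build is configured to reject at startup.
  stat -fc %T /sys/fs/cgroup

With the kubelet never staying up, no static pods (including kube-apiserver) are started, which is consistent with the "connection refused" errors from kubectl above and the "Stopped" apiserver status below.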
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 2 (466.995888ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.48s)

TestStartStop/group/newest-cni/serial/Pause (9.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-704217 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (285.23101ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-704217 -n newest-cni-704217
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (301.277907ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-704217 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (318.708252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-704217 -n newest-cni-704217
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (300.144369ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
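The failing sequence above only shells out to minikube and compares Go-template fields of its status output, so it can be replayed by hand. A sketch of the equivalent manual run (same binary and profile as the test):

	out/minikube-linux-arm64 pause -p newest-cni-704217
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217   # want "Paused"
	out/minikube-linux-arm64 unpause -p newest-cni-704217
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217   # want "Running"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-704217     # want "Running"

Every probe in this run returned "Stopped" instead, which is what trips the three want/got comparisons at start_stop_delete_test.go:309.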
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-704217
helpers_test.go:244: (dbg) docker inspect newest-cni-704217:

-- stdout --
	[
	    {
	        "Id": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	        "Created": "2025-12-22T01:38:39.533406818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2149579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:48:40.537015605Z",
	            "FinishedAt": "2025-12-22T01:48:39.100562621Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hosts",
	        "LogPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e-json.log",
	        "Name": "/newest-cni-704217",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-704217:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-704217",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	                "LowerDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-704217",
	                "Source": "/var/lib/docker/volumes/newest-cni-704217/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-704217",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-704217",
	                "name.minikube.sigs.k8s.io": "newest-cni-704217",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14d50a92928bec4b8ab05dab0866e0253bd6e6099eb5f12680df80e7cd980f01",
	            "SandboxKey": "/var/run/docker/netns/14d50a92928b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38786"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38787"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38790"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38788"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38789"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-704217": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:54:eb:22:0d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37c5f2b876d48064d8ed7b8ba7f6e9a5833c911bae1041e13f11dbc7e56a4324",
	                    "EndpointID": "adf54697897a7d0e245c3181f4c38786594da1f48236d2a9134636be2465ea29",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-704217",
	                        "0d8468deabde"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
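Note the contrast in the inspect dump: the container itself reports "Running": true, with the apiserver port 8443/tcp published on 127.0.0.1:38789. The same host-port lookup minikube performs for 22/tcp in the libmachine lines further down can be issued directly (a sketch against the state captured above):

	# Prints the host port mapped to the apiserver port inside the container (38789 here)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-704217

So the failure is not at the container level; the node is up, but the control-plane processes inside it never left the stopped state after unpause.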
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (356.398734ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25: (1.525404052s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	│ stop    │ -p no-preload-491054 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ addons  │ enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-704217 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ addons  │ enable dashboard -p newest-cni-704217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │                     │
	│ image   │ newest-cni-704217 image list --format=json                                                                                                                                                                                                         │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:54 UTC │ 22 Dec 25 01:54 UTC │
	│ pause   │ -p newest-cni-704217 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:54 UTC │ 22 Dec 25 01:54 UTC │
	│ unpause │ -p newest-cni-704217 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:54 UTC │ 22 Dec 25 01:54 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
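The Audit table above is minikube's persisted command history rather than live log output. Assuming a stock minikube install (the flag and file location below are assumptions about the client, not shown in this run), the same history can be pulled outside a failure report:

	# Show only the audit entries
	minikube logs --audit
	# Raw entries are kept as JSON under the minikube home directory,
	# e.g. ~/.minikube/logs/audit.json on a default install
	cat ~/.minikube/logs/audit.json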
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:48:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:48:40.242767 2149446 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:48:40.242975 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243012 2149446 out.go:374] Setting ErrFile to fd 2...
	I1222 01:48:40.243032 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243304 2149446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:48:40.243710 2149446 out.go:368] Setting JSON to false
	I1222 01:48:40.244718 2149446 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117070,"bootTime":1766251051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:48:40.244822 2149446 start.go:143] virtualization:  
	I1222 01:48:40.247655 2149446 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1222 01:48:35.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:37.597647 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:40.096931 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:40.251418 2149446 notify.go:221] Checking for updates...
	I1222 01:48:40.254130 2149446 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:48:40.257153 2149446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:48:40.259997 2149446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:40.262948 2149446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:48:40.265828 2149446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:48:40.268754 2149446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:48:40.272145 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:40.272826 2149446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:48:40.305841 2149446 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:48:40.306001 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.377826 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.367957332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.377936 2149446 docker.go:319] overlay module found
	I1222 01:48:40.381097 2149446 out.go:179] * Using the docker driver based on existing profile
	I1222 01:48:40.383847 2149446 start.go:309] selected driver: docker
	I1222 01:48:40.383862 2149446 start.go:928] validating driver "docker" against &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.383955 2149446 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:48:40.384655 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.450570 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.441705374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.450907 2149446 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:48:40.450934 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:40.450985 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:40.451021 2149446 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.454057 2149446 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:48:40.456938 2149446 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:48:40.459827 2149446 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:48:40.462746 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:40.462788 2149446 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:48:40.462800 2149446 cache.go:65] Caching tarball of preloaded images
	I1222 01:48:40.462841 2149446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:48:40.462889 2149446 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:48:40.462900 2149446 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:48:40.463007 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.482525 2149446 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:48:40.482547 2149446 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:48:40.482567 2149446 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:48:40.482597 2149446 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:48:40.482656 2149446 start.go:364] duration metric: took 36.504µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:48:40.482682 2149446 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:48:40.482690 2149446 fix.go:54] fixHost starting: 
	I1222 01:48:40.482953 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.499420 2149446 fix.go:112] recreateIfNeeded on newest-cni-704217: state=Stopped err=<nil>
	W1222 01:48:40.499450 2149446 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:48:40.502622 2149446 out.go:252] * Restarting existing docker container for "newest-cni-704217" ...
	I1222 01:48:40.502704 2149446 cli_runner.go:164] Run: docker start newest-cni-704217
	I1222 01:48:40.772557 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.795810 2149446 kic.go:430] container "newest-cni-704217" state is running.
	I1222 01:48:40.796193 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:40.819335 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.819554 2149446 machine.go:94] provisionDockerMachine start ...
	I1222 01:48:40.819841 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:40.841916 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:40.842255 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:40.842265 2149446 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:48:40.844138 2149446 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:48:43.976885 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:43.976911 2149446 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:48:43.977014 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:43.994105 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:43.994420 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:43.994436 2149446 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:48:44.140605 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:44.140693 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.161016 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.161373 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.161391 2149446 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:48:44.293770 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:48:44.293859 2149446 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:48:44.293905 2149446 ubuntu.go:190] setting up certificates
	I1222 01:48:44.293936 2149446 provision.go:84] configureAuth start
	I1222 01:48:44.294013 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:44.316482 2149446 provision.go:143] copyHostCerts
	I1222 01:48:44.316547 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:48:44.316556 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:48:44.316633 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:48:44.316728 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:48:44.316733 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:48:44.316759 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:48:44.316806 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:48:44.316811 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:48:44.316833 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:48:44.316876 2149446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:48:44.455342 2149446 provision.go:177] copyRemoteCerts
	I1222 01:48:44.455417 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:48:44.455467 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.474519 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:44.573438 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:48:44.590713 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:48:44.609690 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:48:44.627314 2149446 provision.go:87] duration metric: took 333.343615ms to configureAuth
	I1222 01:48:44.627344 2149446 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:48:44.627539 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:44.627653 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.649833 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.650146 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.650167 2149446 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:48:44.926182 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:48:44.926204 2149446 machine.go:97] duration metric: took 4.106640691s to provisionDockerMachine
	I1222 01:48:44.926234 2149446 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:48:44.926251 2149446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:48:44.926311 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:48:44.926356 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.946595 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.045827 2149446 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:48:45.062421 2149446 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:48:45.062452 2149446 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:48:45.062466 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:48:45.062535 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:48:45.062625 2149446 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:48:45.062747 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:48:45.075486 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:45.105717 2149446 start.go:296] duration metric: took 179.465684ms for postStartSetup
	I1222 01:48:45.105860 2149446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:48:45.105935 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.131206 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	W1222 01:48:42.097949 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:44.597158 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:45.246727 2149446 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:48:45.256769 2149446 fix.go:56] duration metric: took 4.774070481s for fixHost
	I1222 01:48:45.256799 2149446 start.go:83] releasing machines lock for "newest-cni-704217", held for 4.774128563s
	I1222 01:48:45.256887 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:45.279239 2149446 ssh_runner.go:195] Run: cat /version.json
	I1222 01:48:45.279292 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.279295 2149446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:48:45.279352 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.302892 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.315026 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.495990 2149446 ssh_runner.go:195] Run: systemctl --version
	I1222 01:48:45.502719 2149446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:48:45.539801 2149446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:48:45.545121 2149446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:48:45.545189 2149446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:48:45.553556 2149446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:48:45.553582 2149446 start.go:496] detecting cgroup driver to use...
	I1222 01:48:45.553612 2149446 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:48:45.553661 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:48:45.568706 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:48:45.581738 2149446 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:48:45.581818 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:48:45.597902 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:48:45.614693 2149446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:48:45.735512 2149446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:48:45.846918 2149446 docker.go:234] disabling docker service ...
	I1222 01:48:45.847000 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:48:45.861997 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:48:45.874967 2149446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:48:45.982721 2149446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:48:46.093378 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:48:46.106471 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:48:46.120288 2149446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:48:46.120369 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.130327 2149446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:48:46.130417 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.139300 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.148385 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.157463 2149446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:48:46.165914 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.175804 2149446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.185241 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
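The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A minimal in-memory Go sketch of the same line-oriented rewrite (paths and keys taken from the log; the approach here is illustrative, not minikube's crio.go):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the current contents of 02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// With cgroupfs as the manager, conmon is placed in the pod cgroup.
	conf += "conmon_cgroup = \"pod\"\n"
	fmt.Print(conf)
}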
	I1222 01:48:46.194290 2149446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:48:46.201953 2149446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:48:46.209542 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.327150 2149446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1222 01:48:46.504853 2149446 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:48:46.504948 2149446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:48:46.509123 2149446 start.go:564] Will wait 60s for crictl version
	I1222 01:48:46.509240 2149446 ssh_runner.go:195] Run: which crictl
	I1222 01:48:46.513227 2149446 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:48:46.539190 2149446 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:48:46.539329 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.570401 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.603161 2149446 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:48:46.606112 2149446 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:48:46.622587 2149446 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:48:46.626479 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
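The shell pipeline above is the standard /etc/hosts upsert: grep -v drops any stale host.minikube.internal entry, the fresh mapping is appended, and the result is written to a temp file before being copied over the original. A small Go sketch of the same filter-and-append step (illustrative; minikube does this via the logged shell command, not this code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line already mapping `name` and appends the
// fresh ip->name entry, mirroring the grep -v / echo pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.85.1", "host.minikube.internal"))
}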
	I1222 01:48:46.639159 2149446 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:48:46.642073 2149446 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:48:46.642257 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:46.642336 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.682401 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.682426 2149446 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:48:46.682483 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.712357 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.712379 2149446 cache_images.go:86] Images are preloaded, skipping loading
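The preload check above runs `sudo crictl images --output json` and compares the reported tags against the expected image set before deciding to skip tarball extraction. A hedged Go sketch of parsing that JSON (the struct mirrors only the crictl fields needed here; the expected-image check is simplified to a single tag):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of `crictl images --output json`,
// keeping only the repoTags field this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// In the real check every image required for v1.35.0-rc.1 is verified.
	fmt.Println("pause image preloaded:", have["registry.k8s.io/pause:3.10.1"])
}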
	I1222 01:48:46.712388 2149446 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:48:46.712483 2149446 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:48:46.712573 2149446 ssh_runner.go:195] Run: crio config
	I1222 01:48:46.797974 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:46.798006 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:46.798025 2149446 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:48:46.798055 2149446 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:48:46.798231 2149446 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:48:46.798309 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:48:46.806374 2149446 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:48:46.806461 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:48:46.814198 2149446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:48:46.828274 2149446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:48:46.841483 2149446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
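The kubeadm.yaml.new just transferred is rendered from the kubeadm options struct logged above. A minimal text/template sketch of that rendering step; the struct, field names, and template here are illustrative stand-ins, not minikube's actual template definitions:

package main

import (
	"os"
	"text/template"
)

// initCfg is a hypothetical subset of the options fed into the template.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the config dump in this log.
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8443,
		NodeName:         "newest-cni-704217",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}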
	I1222 01:48:46.854374 2149446 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:48:46.857976 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.868061 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.994841 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:47.017882 2149446 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:48:47.017947 2149446 certs.go:195] generating shared ca certs ...
	I1222 01:48:47.017976 2149446 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.018155 2149446 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:48:47.018235 2149446 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:48:47.018271 2149446 certs.go:257] generating profile certs ...
	I1222 01:48:47.018410 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:48:47.018519 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:48:47.018584 2149446 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:48:47.018730 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:48:47.018786 2149446 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:48:47.018810 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:48:47.018867 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:48:47.018918 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:48:47.018973 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:48:47.019053 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:47.019666 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:48:47.038515 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:48:47.057188 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:48:47.080288 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:48:47.101214 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:48:47.120685 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:48:47.140350 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:48:47.160406 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:48:47.178292 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:48:47.195480 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:48:47.213647 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:48:47.231427 2149446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:48:47.244572 2149446 ssh_runner.go:195] Run: openssl version
	I1222 01:48:47.250948 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.258307 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:48:47.265613 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269481 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.314712 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:48:47.322028 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.329298 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:48:47.336716 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340415 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340535 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.381670 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:48:47.389031 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.396101 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:48:47.403471 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407346 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.449487 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
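Each certificate above goes through the same trust-store dance: symlink the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then confirm a <hash>.0 symlink exists in /etc/ssl/certs (the c_rehash naming convention; 3ec20f2e.0, b5213941.0 and 51391683.0 are the three hashes probed). A hedged Go sketch that approximates the hash-and-link step by shelling out to openssl, since the log only shows the verification side:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehash computes the OpenSSL subject hash of a PEM certificate and
// links it as /etc/ssl/certs/<hash>.0, approximating `ln -fs`
// semantics with remove-then-link.
func rehash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}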
	I1222 01:48:47.456987 2149446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:48:47.460641 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:48:47.501381 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:48:47.542911 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:48:47.587141 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:48:47.628220 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:48:47.670351 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
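The six `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). The same probe expressed with Go's standard crypto/x509, shown here as an equivalent sketch rather than minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path expires within
// 24 hours, mirroring `openssl x509 -checkend 86400`.
func expiresSoon(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}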
	I1222 01:48:47.715724 2149446 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:47.715830 2149446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:48:47.715926 2149446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:48:47.743013 2149446 cri.go:96] found id: ""
	I1222 01:48:47.743133 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:48:47.750858 2149446 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:48:47.750922 2149446 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:48:47.750979 2149446 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:48:47.758109 2149446 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:48:47.758748 2149446 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.759013 2149446 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-704217" cluster setting kubeconfig missing "newest-cni-704217" context setting]
	I1222 01:48:47.759473 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.760858 2149446 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:48:47.768601 2149446 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:48:47.768668 2149446 kubeadm.go:602] duration metric: took 17.733602ms to restartPrimaryControlPlane
	I1222 01:48:47.768685 2149446 kubeadm.go:403] duration metric: took 52.971018ms to StartCluster
	I1222 01:48:47.768701 2149446 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.768757 2149446 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.769685 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.769905 2149446 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:48:47.770207 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:47.770291 2149446 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:48:47.770386 2149446 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-704217"
	I1222 01:48:47.770417 2149446 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-704217"
	I1222 01:48:47.770506 2149446 addons.go:70] Setting dashboard=true in profile "newest-cni-704217"
	I1222 01:48:47.770541 2149446 addons.go:70] Setting default-storageclass=true in profile "newest-cni-704217"
	I1222 01:48:47.770547 2149446 addons.go:239] Setting addon dashboard=true in "newest-cni-704217"
	W1222 01:48:47.770588 2149446 addons.go:248] addon dashboard should already be in state true
	I1222 01:48:47.770610 2149446 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-704217"
	I1222 01:48:47.770621 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.770999 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.771324 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.770518 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.772038 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.775384 2149446 out.go:179] * Verifying Kubernetes components...
	I1222 01:48:47.778350 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:47.802410 2149446 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:48:47.805256 2149446 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:48:47.807963 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:48:47.807985 2149446 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:48:47.808059 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.828399 2149446 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:48:47.831382 2149446 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:47.831415 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:48:47.831508 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.834144 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.844610 2149446 addons.go:239] Setting addon default-storageclass=true in "newest-cni-704217"
	I1222 01:48:47.844670 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.845187 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.877451 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.891162 2149446 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:47.891205 2149446 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:48:47.891280 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.926288 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:48.042819 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:48.112488 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:48:48.112513 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:48:48.128623 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:48:48.128699 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:48:48.148283 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:48:48.148321 2149446 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:48:48.156766 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:48.161900 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:48.167407 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:48:48.167429 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:48:48.203144 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:48:48.203175 2149446 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:48:48.254999 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:48:48.255073 2149446 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:48:48.269219 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:48:48.269248 2149446 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:48:48.282496 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:48:48.282561 2149446 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:48:48.295350 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.295416 2149446 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:48:48.308102 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.846401 2149446 api_server.go:52] waiting for apiserver process to appear ...
	W1222 01:48:48.846466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:48.846520 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:48.846522 2149446 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846636 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846839 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.013510 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:49.073085 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.109119 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:49.170278 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.173753 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.230752 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.347104 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:49.363263 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:49.431947 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.591271 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:49.624743 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.676288 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.711818 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.847130 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.002166 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:50.073384 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.206474 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:46.597453 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:49.097694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:50.265056 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.347283 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.486129 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:50.543791 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.847346 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.143881 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:51.205787 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.283923 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:51.341457 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.542158 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:51.609543 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.847044 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.045399 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:52.115058 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.347521 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.607702 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:52.683195 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.846571 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.305882 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:53.347187 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:53.376466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.687056 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:53.754007 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.847202 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.994728 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:54.057241 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:54.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:54.846622 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:51.596668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:53.597225 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:55.347576 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:55.800828 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:55.847504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:55.860654 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.347301 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:56.472869 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:56.531116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.846537 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.346757 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.706997 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:57.770163 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:57.847151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.347452 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.521177 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:58.583041 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:58.847501 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.347147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.847527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.918746 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:59.977904 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.192217 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:55.597516 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:57.597678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.100741 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.333722 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.346752 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:00.847514 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.346636 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.846688 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.347266 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.846533 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.346590 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.346607 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
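The half-second cadence of the pgrep lines is the harness waiting for a kube-apiserver process to reappear before the addon applies are worth retrying. A rough equivalent of that wait loop follows; the cadence is read off the timestamps above, but the loop itself is an assumption, not minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for {
		// -x exact match, -n newest, -f match against the full command line;
		// pgrep exits non-zero when no process matches, which surfaces here as err.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the spacing of the Run: lines above
	}
}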
	I1222 01:49:04.761472 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:49:04.773029 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:49:04.847157 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:04.857098 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.857138 2149446 retry.go:84] will retry after 7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:04.857185 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.879231 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:04.950615 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:02.596826 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:05.096792 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:05.347388 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:05.847467 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.346572 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.846691 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.347421 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.347439 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.847592 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.847473 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:07.596780 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:09.597714 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
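Interleaved with the addon retries, a second process (pid 2142162, the no-preload test) polls the node's Ready condition against 192.168.76.2:8443 roughly every 2.5 seconds and hits the same refused connection. The check corresponds to something like the following client-go sketch; the node name is taken from the log, but the polling code is illustrative rather than the test's actual source:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
// While the apiserver is down, the Get itself returns "connection refused".
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(cs, "no-preload-491054")
		if err != nil {
			fmt.Println("will retry:", err) // mirrors the node_ready.go warnings
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
}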
	I1222 01:49:10.346839 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:10.642468 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:10.708302 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.708346 2149446 retry.go:84] will retry after 9.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.847493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.346650 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846502 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:11.932894 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:12.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:12.847370 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.346723 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.846605 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.347318 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.846610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:12.096720 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:14.596604 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:15.276803 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:15.337116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.337152 2149446 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.347320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:15.847453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.846547 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.347494 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.847450 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.347436 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.846663 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.347534 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.846595 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:16.596756 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:19.096677 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:20.347375 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:20.550393 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:20.629907 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.629948 2149446 retry.go:84] will retry after 13.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.847352 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.347359 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.847541 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.346619 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.846606 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.347285 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.847257 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.346695 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.847195 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:21.096800 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:23.596617 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:25.347322 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:25.847511 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.346649 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.847539 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.261998 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:27.328263 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.328301 2149446 retry.go:84] will retry after 13.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.347392 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.847138 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.346635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.547391 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:28.610028 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:28.847445 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.346661 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.847366 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:25.596765 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:28.096729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:30.346864 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:30.846648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.846643 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.846656 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.346602 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.846639 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.494891 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:34.556245 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.556282 2149446 retry.go:84] will retry after 24.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.846617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:30.596693 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:33.096665 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:35.596759 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:38.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:38.596364 2142162 node_ready.go:38] duration metric: took 6m0.000147881s for node "no-preload-491054" to be "Ready" ...
	I1222 01:49:38.599511 2142162 out.go:203] 
	W1222 01:49:38.602362 2142162 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:49:38.602381 2142162 out.go:285] * 
	W1222 01:49:38.604626 2142162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:49:38.608399 2142162 out.go:203] 
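Process 2142162 is the no-preload-491054 restart: it polled the node's Ready condition roughly every 2.5s (per the timestamps above), treated each connection-refused as transient, and only raised GUEST_START once the 6m0s WaitNodeCondition budget expired. A sketch of that kind of wait loop using client-go, with the interval and deadline read off the log — the function name and wiring are ours, not minikube's node_ready.go:

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node reports Ready or the deadline passes.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // Connection refused is treated as transient: keep polling,
                    // matching the "(will retry)" warnings in the log.
                    return false, nil
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }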
	I1222 01:49:35.347129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:35.846681 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.346565 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.846669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.347286 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.846895 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.346662 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.847012 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.347115 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.846591 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.347475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.735554 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:40.795474 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:40.795515 2149446 retry.go:84] will retry after 21.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:40.846792 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.172727 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:41.231783 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:41.347078 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.846609 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.346631 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.847610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.346556 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.847443 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.347395 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.847490 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.346921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.846697 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.346758 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.847524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.346683 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.846830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:47.846917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:47.875900 2149446 cri.go:96] found id: ""
	I1222 01:49:47.875920 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.875929 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:47.875935 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:47.875999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:47.911877 2149446 cri.go:96] found id: ""
	I1222 01:49:47.911899 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.911908 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:47.911914 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:47.911982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:47.941469 2149446 cri.go:96] found id: ""
	I1222 01:49:47.941492 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.941500 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:47.941507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:47.941566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:47.967048 2149446 cri.go:96] found id: ""
	I1222 01:49:47.967088 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.967097 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:47.967104 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:47.967180 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:47.995110 2149446 cri.go:96] found id: ""
	I1222 01:49:47.995135 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.995144 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:47.995150 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:47.995212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:48.023445 2149446 cri.go:96] found id: ""
	I1222 01:49:48.023513 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.023525 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:48.023533 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:48.023607 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:48.051956 2149446 cri.go:96] found id: ""
	I1222 01:49:48.052023 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.052044 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:48.052063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:48.052151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:48.079789 2149446 cri.go:96] found id: ""
	I1222 01:49:48.079817 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.079827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:48.079836 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:48.079847 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:48.148881 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:48.148917 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:48.166528 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:48.166562 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:48.230410 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:48.221856    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.222556    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224311    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224847    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.226455    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:48.221856    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.222556    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224311    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224847    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.226455    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:49:48.230487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:48.230509 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:48.259694 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:48.259729 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
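With no apiserver process for pgrep to find, the tool falls back to enumerating CRI containers by name and gathering kubelet, dmesg, and CRI-O logs. Every `crictl ps -a --quiet` above returns an empty ID list, so CRI-O itself is responding but no control-plane container was ever created. A sketch of that listing step, using exactly the crictl command from the log — the Go wrapper and its name are ours:

    package crilist

    import (
        "os/exec"
        "strings"
    )

    // listCRIContainers returns the container IDs crictl reports for a name
    // filter; an empty slice corresponds to the `found id: ""` /
    // `0 containers: []` lines above.
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "--timeout=10s",
            "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // crictl prints one container ID per line when containers exist.
        return strings.Fields(string(out)), nil
    }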
	I1222 01:49:50.788374 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:50.799012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:50.799080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:50.824427 2149446 cri.go:96] found id: ""
	I1222 01:49:50.824451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.824460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:50.824467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:50.824526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:50.857602 2149446 cri.go:96] found id: ""
	I1222 01:49:50.857626 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.857636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:50.857642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:50.857700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:50.886784 2149446 cri.go:96] found id: ""
	I1222 01:49:50.886809 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.886817 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:50.886824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:50.886881 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:50.913209 2149446 cri.go:96] found id: ""
	I1222 01:49:50.913239 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.913249 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:50.913256 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:50.913362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:50.939349 2149446 cri.go:96] found id: ""
	I1222 01:49:50.939372 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.939381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:50.939387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:50.939458 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:50.968002 2149446 cri.go:96] found id: ""
	I1222 01:49:50.968068 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.968080 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:50.968092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:50.968182 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:50.992529 2149446 cri.go:96] found id: ""
	I1222 01:49:50.992596 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.992618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:50.992638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:50.992738 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:51.020072 2149446 cri.go:96] found id: ""
	I1222 01:49:51.020143 2149446 logs.go:282] 0 containers: []
	W1222 01:49:51.020165 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:51.020185 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:51.020224 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:51.085768 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:51.085801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:51.102878 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:51.102907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:51.169244 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:51.160719    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.161459    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163246    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163762    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.165256    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:51.160719    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.161459    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163246    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163762    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.165256    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:49:51.169268 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:51.169280 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:51.199236 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:51.199275 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:53.726478 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:53.736764 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:53.736833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:53.761645 2149446 cri.go:96] found id: ""
	I1222 01:49:53.761673 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.761681 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:53.761693 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:53.761752 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:53.790798 2149446 cri.go:96] found id: ""
	I1222 01:49:53.790875 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.790892 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:53.790899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:53.790960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:53.814874 2149446 cri.go:96] found id: ""
	I1222 01:49:53.814941 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.814956 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:53.814963 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:53.815022 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:53.839840 2149446 cri.go:96] found id: ""
	I1222 01:49:53.839906 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.839930 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:53.839950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:53.840033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:53.875780 2149446 cri.go:96] found id: ""
	I1222 01:49:53.875850 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.875870 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:53.875890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:53.875989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:53.908372 2149446 cri.go:96] found id: ""
	I1222 01:49:53.908451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.908473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:53.908519 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:53.908603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:53.937905 2149446 cri.go:96] found id: ""
	I1222 01:49:53.937975 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.937989 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:53.937997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:53.938056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:53.964017 2149446 cri.go:96] found id: ""
	I1222 01:49:53.964043 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.964052 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:53.964061 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:53.964072 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:53.980713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:53.980743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:54.051694 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:54.042644    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.043169    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.044865    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.045545    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.047279    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:54.042644    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.043169    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.044865    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.045545    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.047279    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:49:54.051718 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:54.051732 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:54.081680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:54.081715 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:54.113831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:54.113858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.682134 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:56.692686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:56.692755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:56.718335 2149446 cri.go:96] found id: ""
	I1222 01:49:56.718360 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.718369 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:56.718375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:56.718433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:56.750169 2149446 cri.go:96] found id: ""
	I1222 01:49:56.750199 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.750207 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:56.750214 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:56.750289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:56.775009 2149446 cri.go:96] found id: ""
	I1222 01:49:56.775033 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.775043 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:56.775049 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:56.775118 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:56.800202 2149446 cri.go:96] found id: ""
	I1222 01:49:56.800226 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.800245 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:56.800253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:56.800317 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:56.825535 2149446 cri.go:96] found id: ""
	I1222 01:49:56.825559 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.825567 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:56.825574 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:56.825633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:56.860355 2149446 cri.go:96] found id: ""
	I1222 01:49:56.860379 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.860388 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:56.860395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:56.860455 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:56.900413 2149446 cri.go:96] found id: ""
	I1222 01:49:56.900481 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.900502 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:56.900520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:56.900608 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:56.930278 2149446 cri.go:96] found id: ""
	I1222 01:49:56.930343 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.930365 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:56.930381 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:56.930394 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.997740 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:56.997776 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:57.016721 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:57.016750 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:57.084545 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:57.076568    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.077143    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.078617    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.079164    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.080737    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:57.076568    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.077143    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.078617    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.079164    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.080737    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:49:57.084566 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:57.084580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:57.112759 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:57.112793 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:58.183323 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:58.246165 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:58.246322 2149446 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
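At this point the retry budget for default-storageclass is exhausted, so the error is surfaced to the user (the out.go warning above) while the storage-provisioner and dashboard addons keep retrying below. The apply step itself is the fixed in-VM kubectl command seen throughout the log; a minimal wrapper around it, with the paths copied from the log and the helper name ours:

    package addons

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon runs the same in-VM kubectl apply the log shows; sudo accepts
    // the leading KUBECONFIG=... assignment, so the env var reaches kubectl.
    func applyAddon(manifest string) error {
        out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "apply", "--force", "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
        }
        return nil
    }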
	I1222 01:49:59.087252 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:59.163517 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:59.163599 2149446 retry.go:84] will retry after 41.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:59.646777 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:59.657009 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:59.657077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:59.682535 2149446 cri.go:96] found id: ""
	I1222 01:49:59.682562 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.682580 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:59.682587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:59.682685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:59.712155 2149446 cri.go:96] found id: ""
	I1222 01:49:59.712179 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.712194 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:59.712201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:59.712313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:59.736832 2149446 cri.go:96] found id: ""
	I1222 01:49:59.736859 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.736868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:59.736884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:59.736987 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:59.767806 2149446 cri.go:96] found id: ""
	I1222 01:49:59.767874 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.767897 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:59.767917 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:59.767991 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:59.795439 2149446 cri.go:96] found id: ""
	I1222 01:49:59.795504 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.795529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:59.795549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:59.795616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:59.819847 2149446 cri.go:96] found id: ""
	I1222 01:49:59.819882 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.819892 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:59.819899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:59.819966 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:59.850480 2149446 cri.go:96] found id: ""
	I1222 01:49:59.850509 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.850518 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:59.850524 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:59.850592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:59.884120 2149446 cri.go:96] found id: ""
	I1222 01:49:59.884144 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.884153 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:59.884161 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:59.884173 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:59.957806 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:59.957845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:59.974377 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:59.974406 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:00.101182 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:00.069859    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.070359    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.072368    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.077118    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.078499    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:00.069859    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.070359    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.072368    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.077118    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.078499    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:00.101220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:00.101234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:00.179956 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:00.180075 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
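[Editor's note] Every "describe nodes" gathering step in these cycles runs the pinned kubectl binary against the node-local kubeconfig, so while nothing is listening on localhost:8443 each attempt exits with status 1 after the five discovery errors and the connection-refused summary. A minimal sketch of that probe, with the binary and kubeconfig paths copied verbatim from the log and error handling simplified:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// While localhost:8443 refuses connections, this prints the
		// discovery errors and connection-refused summary seen above.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
```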
	I1222 01:50:02.382663 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:02.449495 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:50:02.449536 2149446 retry.go:84] will retry after 44.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:50:02.730117 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:02.740995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:02.741072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:02.768772 2149446 cri.go:96] found id: ""
	I1222 01:50:02.768797 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.768805 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:02.768812 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:02.768871 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:02.794287 2149446 cri.go:96] found id: ""
	I1222 01:50:02.794311 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.794319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:02.794326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:02.794389 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:02.819466 2149446 cri.go:96] found id: ""
	I1222 01:50:02.819546 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.819568 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:02.819586 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:02.819669 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:02.853024 2149446 cri.go:96] found id: ""
	I1222 01:50:02.853053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.853062 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:02.853068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:02.853137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:02.880476 2149446 cri.go:96] found id: ""
	I1222 01:50:02.880500 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.880509 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:02.880522 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:02.880594 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:02.911828 2149446 cri.go:96] found id: ""
	I1222 01:50:02.911852 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.911861 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:02.911868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:02.911945 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:02.938488 2149446 cri.go:96] found id: ""
	I1222 01:50:02.938567 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.938589 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:02.938610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:02.938701 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:02.963720 2149446 cri.go:96] found id: ""
	I1222 01:50:02.963747 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.963756 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:02.963765 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:02.963777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:03.029961 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:03.030002 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:03.047145 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:03.047175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:03.113634 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:03.105215    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.105984    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.107648    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.108298    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.109821    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:03.105215    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.105984    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.107648    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.108298    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.109821    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:03.113655 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:03.113667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:03.143385 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:03.143421 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
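[Editor's note] Each probing cycle above checks for every control-plane component with the same crictl invocation and records `found id: ""` / "0 containers" when nothing is running. A compact sketch of that loop, assuming sudo and crictl are available on the node; the flags and component list are copied from the log, and the printing is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component list and crictl flags copied from the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Matches the log's "No container was found matching" case.
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
```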
	I1222 01:50:05.679417 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:05.690070 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:05.690144 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:05.715488 2149446 cri.go:96] found id: ""
	I1222 01:50:05.715513 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.715531 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:05.715538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:05.715604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:05.746380 2149446 cri.go:96] found id: ""
	I1222 01:50:05.746456 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.746478 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:05.746492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:05.746570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:05.770990 2149446 cri.go:96] found id: ""
	I1222 01:50:05.771014 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.771023 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:05.771030 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:05.771092 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:05.796198 2149446 cri.go:96] found id: ""
	I1222 01:50:05.796220 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.796228 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:05.796235 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:05.796293 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:05.823560 2149446 cri.go:96] found id: ""
	I1222 01:50:05.823587 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.823595 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:05.823601 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:05.823660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:05.856315 2149446 cri.go:96] found id: ""
	I1222 01:50:05.856349 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.856359 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:05.856365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:05.856425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:05.895348 2149446 cri.go:96] found id: ""
	I1222 01:50:05.895375 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.895383 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:05.895390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:05.895452 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:05.924398 2149446 cri.go:96] found id: ""
	I1222 01:50:05.924466 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.924478 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:05.924487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:05.924499 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:05.952966 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:05.953003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:05.984569 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:05.984596 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:06.051373 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:06.051410 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:06.069275 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:06.069345 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:06.136767 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:06.127838    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.128596    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.130370    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.131018    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.132875    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:06.127838    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.128596    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.130370    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.131018    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.132875    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:08.637440 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:08.649276 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:08.649375 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:08.676353 2149446 cri.go:96] found id: ""
	I1222 01:50:08.676377 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.676386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:08.676393 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:08.676457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:08.702672 2149446 cri.go:96] found id: ""
	I1222 01:50:08.702696 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.702705 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:08.702712 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:08.702773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:08.733007 2149446 cri.go:96] found id: ""
	I1222 01:50:08.733029 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.733038 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:08.733044 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:08.733108 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:08.758339 2149446 cri.go:96] found id: ""
	I1222 01:50:08.758363 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.758371 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:08.758378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:08.758437 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:08.783187 2149446 cri.go:96] found id: ""
	I1222 01:50:08.783210 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.783219 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:08.783225 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:08.783290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:08.813005 2149446 cri.go:96] found id: ""
	I1222 01:50:08.813028 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.813036 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:08.813042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:08.813102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:08.840050 2149446 cri.go:96] found id: ""
	I1222 01:50:08.840073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.840081 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:08.840087 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:08.840232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:08.879901 2149446 cri.go:96] found id: ""
	I1222 01:50:08.879976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.880013 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:08.880039 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:08.880067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:08.955189 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:08.945414    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.946257    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948041    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948826    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.950714    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:08.945414    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.946257    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948041    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948826    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.950714    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:08.955272 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:08.955297 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:08.983942 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:08.983976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:09.023107 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:09.023137 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:09.089046 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:09.089084 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
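[Editor's note] With no control-plane containers found, each cycle falls back to host-level sources: the kubelet and crio journals, filtered dmesg, and a container-status listing. A sketch of that gathering step using the same shell pipelines the log runs; printing to stdout is a stand-in for the report's log collection:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same sources, same shell pipelines as the log above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf("==> Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}
```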
	I1222 01:50:11.605972 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:11.618258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:11.618329 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:11.663499 2149446 cri.go:96] found id: ""
	I1222 01:50:11.663523 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.663531 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:11.663538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:11.663603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:11.693496 2149446 cri.go:96] found id: ""
	I1222 01:50:11.693524 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.693533 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:11.693539 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:11.693610 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:11.719147 2149446 cri.go:96] found id: ""
	I1222 01:50:11.719173 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.719183 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:11.719189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:11.719256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:11.744820 2149446 cri.go:96] found id: ""
	I1222 01:50:11.744845 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.744854 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:11.744860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:11.744919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:11.770539 2149446 cri.go:96] found id: ""
	I1222 01:50:11.770566 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.770575 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:11.770582 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:11.770643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:11.796499 2149446 cri.go:96] found id: ""
	I1222 01:50:11.796522 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.796531 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:11.796538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:11.796596 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:11.821798 2149446 cri.go:96] found id: ""
	I1222 01:50:11.821822 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.821831 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:11.821837 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:11.821901 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:11.855646 2149446 cri.go:96] found id: ""
	I1222 01:50:11.855668 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.855677 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:11.855685 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:11.855697 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:11.897110 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:11.897191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:11.968365 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:11.968400 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:11.985862 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:11.985890 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:12.057003 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:12.047720    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.048930    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.049816    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.050506    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.052441    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:12.047720    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.048930    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.049816    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.050506    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.052441    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:12.057028 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:12.057044 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:14.587717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:14.601824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:14.601894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:14.633615 2149446 cri.go:96] found id: ""
	I1222 01:50:14.633639 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.633651 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:14.633667 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:14.633734 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:14.665995 2149446 cri.go:96] found id: ""
	I1222 01:50:14.666024 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.666034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:14.666040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:14.666104 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:14.694750 2149446 cri.go:96] found id: ""
	I1222 01:50:14.694832 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.694846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:14.694854 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:14.694914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:14.721367 2149446 cri.go:96] found id: ""
	I1222 01:50:14.721391 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.721401 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:14.721407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:14.721465 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:14.746783 2149446 cri.go:96] found id: ""
	I1222 01:50:14.746864 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.746880 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:14.746887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:14.746951 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:14.776055 2149446 cri.go:96] found id: ""
	I1222 01:50:14.776078 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.776087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:14.776094 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:14.776154 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:14.802137 2149446 cri.go:96] found id: ""
	I1222 01:50:14.802162 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.802171 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:14.802178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:14.802236 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:14.827099 2149446 cri.go:96] found id: ""
	I1222 01:50:14.827176 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.827192 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:14.827203 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:14.827215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:14.843942 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:14.843972 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:14.924047 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:14.915431    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.916077    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918044    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918580    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.920233    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:14.915431    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.916077    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918044    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918580    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.920233    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:14.924081 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:14.924094 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:14.952630 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:14.952666 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:14.981965 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:14.981992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:17.554296 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:17.564949 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:17.565018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:17.596428 2149446 cri.go:96] found id: ""
	I1222 01:50:17.596451 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.596465 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:17.596472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:17.596531 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:17.629782 2149446 cri.go:96] found id: ""
	I1222 01:50:17.629804 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.629812 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:17.629820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:17.629877 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:17.658278 2149446 cri.go:96] found id: ""
	I1222 01:50:17.658305 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.658313 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:17.658319 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:17.658384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:17.684159 2149446 cri.go:96] found id: ""
	I1222 01:50:17.684227 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.684248 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:17.684268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:17.684395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:17.712612 2149446 cri.go:96] found id: ""
	I1222 01:50:17.712682 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.712713 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:17.712751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:17.712835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:17.740669 2149446 cri.go:96] found id: ""
	I1222 01:50:17.740737 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.740758 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:17.740779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:17.740868 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:17.767931 2149446 cri.go:96] found id: ""
	I1222 01:50:17.767998 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.768021 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:17.768042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:17.768129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:17.793937 2149446 cri.go:96] found id: ""
	I1222 01:50:17.794002 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.794015 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:17.794025 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:17.794037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:17.860940 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:17.860976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:17.885026 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:17.885106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:17.949991 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:17.942003    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.942521    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944226    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944614    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.946054    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
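
Every "describe nodes" attempt in this stretch fails the same way: kubectl dials the apiserver at localhost:8443 and the connection is refused, which is consistent with the empty crictl listings above (no kube-apiserver container exists yet). A quick manual probe from the node, assuming the same binary and kubeconfig paths seen in the log (the command itself is illustrative, not taken from this run):

	# Probe apiserver health directly; reproduces the failure mode shown above.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  || echo "apiserver not reachable on localhost:8443"
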
	I1222 01:50:17.950015 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:17.950028 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:17.978296 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:17.978329 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
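
Each polling cycle in this section is minikube waiting for the control plane: it looks for a kube-apiserver process, enumerates the expected control-plane containers through the CRI, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal sketch of the same container check, assuming crictl is installed on the node (the loop is illustrative; minikube issues these commands individually over SSH):

	# Look for a running apiserver process, then list each expected container.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
	  # An empty result corresponds to the 'found id: ""' lines in the log.
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done
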
	I1222 01:50:20.513236 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:20.523539 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:20.523613 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:20.548569 2149446 cri.go:96] found id: ""
	I1222 01:50:20.548592 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.548600 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:20.548607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:20.548670 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:20.573066 2149446 cri.go:96] found id: ""
	I1222 01:50:20.573145 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.573167 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:20.573185 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:20.573276 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:20.604633 2149446 cri.go:96] found id: ""
	I1222 01:50:20.604659 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.604668 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:20.604675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:20.604735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:20.635610 2149446 cri.go:96] found id: ""
	I1222 01:50:20.635636 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.635645 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:20.635651 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:20.635717 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:20.670374 2149446 cri.go:96] found id: ""
	I1222 01:50:20.670441 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.670455 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:20.670462 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:20.670524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:20.699800 2149446 cri.go:96] found id: ""
	I1222 01:50:20.699822 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.699830 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:20.699837 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:20.699894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:20.728024 2149446 cri.go:96] found id: ""
	I1222 01:50:20.728048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.728057 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:20.728063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:20.728122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:20.758174 2149446 cri.go:96] found id: ""
	I1222 01:50:20.758198 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.758206 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:20.758215 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:20.758227 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:20.823612 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:20.823650 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:20.840051 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:20.840081 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:20.925259 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:20.915370    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.916028    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.918122    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.919738    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.921230    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:20.925294 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:20.925307 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:20.955201 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:20.955234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:23.483497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:23.494221 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:23.494298 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:23.523189 2149446 cri.go:96] found id: ""
	I1222 01:50:23.523214 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.523223 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:23.523229 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:23.523289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:23.550633 2149446 cri.go:96] found id: ""
	I1222 01:50:23.550656 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.550666 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:23.550672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:23.550730 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:23.576289 2149446 cri.go:96] found id: ""
	I1222 01:50:23.576312 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.576328 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:23.576335 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:23.576395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:23.611281 2149446 cri.go:96] found id: ""
	I1222 01:50:23.611304 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.611312 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:23.611320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:23.611381 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:23.640498 2149446 cri.go:96] found id: ""
	I1222 01:50:23.640520 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.640529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:23.640535 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:23.640595 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:23.674625 2149446 cri.go:96] found id: ""
	I1222 01:50:23.674649 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.674658 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:23.674665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:23.674723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:23.699746 2149446 cri.go:96] found id: ""
	I1222 01:50:23.699771 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.699780 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:23.699786 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:23.699849 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:23.724314 2149446 cri.go:96] found id: ""
	I1222 01:50:23.724345 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.724353 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:23.724369 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:23.724381 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:23.740816 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:23.740846 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:23.813467 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:23.804843    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.805658    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807401    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807908    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.809520    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:23.813488 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:23.813501 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:23.844492 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:23.844529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:23.883925 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:23.884003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:26.466430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:26.480754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:26.480834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:26.505742 2149446 cri.go:96] found id: ""
	I1222 01:50:26.505825 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.505856 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:26.505876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:26.505986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:26.532705 2149446 cri.go:96] found id: ""
	I1222 01:50:26.532781 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.532816 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:26.532839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:26.532932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:26.557032 2149446 cri.go:96] found id: ""
	I1222 01:50:26.557116 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.557139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:26.557170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:26.557247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:26.583725 2149446 cri.go:96] found id: ""
	I1222 01:50:26.583812 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.583835 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:26.583853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:26.583962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:26.617024 2149446 cri.go:96] found id: ""
	I1222 01:50:26.617089 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.617121 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:26.617140 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:26.617246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:26.646386 2149446 cri.go:96] found id: ""
	I1222 01:50:26.646464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.646488 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:26.646507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:26.646612 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:26.678246 2149446 cri.go:96] found id: ""
	I1222 01:50:26.678269 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.678278 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:26.678285 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:26.678347 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:26.703982 2149446 cri.go:96] found id: ""
	I1222 01:50:26.704008 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.704017 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:26.704026 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:26.704042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:26.720933 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:26.720994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:26.786938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:26.786961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:26.786974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:26.815331 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:26.815367 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:26.845602 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:26.845630 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.422527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:29.433711 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:29.433793 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:29.465497 2149446 cri.go:96] found id: ""
	I1222 01:50:29.465548 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.465558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:29.465564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:29.465634 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:29.490894 2149446 cri.go:96] found id: ""
	I1222 01:50:29.490919 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.490927 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:29.490938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:29.490998 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:29.520626 2149446 cri.go:96] found id: ""
	I1222 01:50:29.520661 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.520670 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:29.520677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:29.520747 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:29.547848 2149446 cri.go:96] found id: ""
	I1222 01:50:29.547885 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.547893 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:29.547900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:29.547982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:29.577023 2149446 cri.go:96] found id: ""
	I1222 01:50:29.577048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.577057 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:29.577065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:29.577127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:29.611778 2149446 cri.go:96] found id: ""
	I1222 01:50:29.611814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.611823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:29.611830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:29.611903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:29.643842 2149446 cri.go:96] found id: ""
	I1222 01:50:29.643918 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.643941 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:29.643958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:29.644044 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:29.674585 2149446 cri.go:96] found id: ""
	I1222 01:50:29.674658 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.674680 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:29.674704 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:29.674743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:29.703700 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:29.703743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:29.735551 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:29.735577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.800624 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:29.800662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:29.817316 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:29.817364 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:29.907057 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:32.407335 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:32.417713 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:32.417782 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:32.443620 2149446 cri.go:96] found id: ""
	I1222 01:50:32.443648 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.443656 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:32.443662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:32.443722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:32.473803 2149446 cri.go:96] found id: ""
	I1222 01:50:32.473831 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.473840 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:32.473847 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:32.473908 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:32.503900 2149446 cri.go:96] found id: ""
	I1222 01:50:32.503928 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.503936 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:32.503943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:32.504007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:32.533317 2149446 cri.go:96] found id: ""
	I1222 01:50:32.533427 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.533436 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:32.533442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:32.533503 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:32.558269 2149446 cri.go:96] found id: ""
	I1222 01:50:32.558292 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.558300 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:32.558307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:32.558366 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:32.583954 2149446 cri.go:96] found id: ""
	I1222 01:50:32.583976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.583984 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:32.583990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:32.584048 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:32.619938 2149446 cri.go:96] found id: ""
	I1222 01:50:32.619959 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.619967 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:32.619974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:32.620032 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:32.646855 2149446 cri.go:96] found id: ""
	I1222 01:50:32.646876 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.646883 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:32.646892 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:32.646903 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:32.681292 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:32.681331 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:32.709454 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:32.709486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:32.774478 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:32.774512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:32.790947 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:32.790977 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:32.865716 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:32.856728    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.857643    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.858424    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.859539    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.860242    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:35.366405 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:35.377118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:35.377211 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:35.405034 2149446 cri.go:96] found id: ""
	I1222 01:50:35.405073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.405083 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:35.405089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:35.405172 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:35.429958 2149446 cri.go:96] found id: ""
	I1222 01:50:35.429985 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.429994 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:35.430002 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:35.430063 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:35.455069 2149446 cri.go:96] found id: ""
	I1222 01:50:35.455094 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.455103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:35.455112 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:35.455173 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:35.482985 2149446 cri.go:96] found id: ""
	I1222 01:50:35.483053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.483075 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:35.483089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:35.483165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:35.506805 2149446 cri.go:96] found id: ""
	I1222 01:50:35.506874 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.506894 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:35.506909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:35.506983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:35.532897 2149446 cri.go:96] found id: ""
	I1222 01:50:35.532925 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.532934 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:35.532941 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:35.533002 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:35.564747 2149446 cri.go:96] found id: ""
	I1222 01:50:35.564774 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.564783 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:35.564789 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:35.564858 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:35.589912 2149446 cri.go:96] found id: ""
	I1222 01:50:35.589939 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.589948 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:35.589956 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:35.589969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:35.665301 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:35.665441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:35.682348 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:35.682377 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:35.749489 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:35.741663    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.742347    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.743818    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.744264    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.745698    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:35.749511 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:35.749524 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:35.776750 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:35.776782 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.306113 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:38.316656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:38.316729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:38.343191 2149446 cri.go:96] found id: ""
	I1222 01:50:38.343229 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.343238 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:38.343245 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:38.343311 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:38.371584 2149446 cri.go:96] found id: ""
	I1222 01:50:38.371609 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.371618 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:38.371625 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:38.371686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:38.399378 2149446 cri.go:96] found id: ""
	I1222 01:50:38.399401 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.399410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:38.399416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:38.399476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:38.424466 2149446 cri.go:96] found id: ""
	I1222 01:50:38.424492 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.424500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:38.424513 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:38.424575 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:38.449419 2149446 cri.go:96] found id: ""
	I1222 01:50:38.449452 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.449461 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:38.449468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:38.449533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:38.475321 2149446 cri.go:96] found id: ""
	I1222 01:50:38.475347 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.475356 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:38.475363 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:38.475423 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:38.499428 2149446 cri.go:96] found id: ""
	I1222 01:50:38.499458 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.499469 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:38.499476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:38.499541 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:38.524738 2149446 cri.go:96] found id: ""
	I1222 01:50:38.524766 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.524780 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:38.524789 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:38.524800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:38.557984 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:38.558022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.589548 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:38.589577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:38.671416 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:38.671453 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:38.688135 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:38.688164 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:38.751396 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:38.743333    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.744008    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.745568    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.746050    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.747565    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:40.899829 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:50:40.960744 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:40.960855 2149446 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
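
The storage-provisioner addon fails for the same underlying reason: kubectl apply downloads the OpenAPI schema from https://localhost:8443 for validation, and that connection is refused. The --validate=false flag mentioned in the error would only skip schema validation; the apply itself still needs a reachable apiserver, which is why minikube retries instead. For illustration only, the suggested flag applied to the same command from the log (it does not work around a down apiserver):

	# Skips OpenAPI validation only; the API server must still be reachable.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml
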
	I1222 01:50:41.252211 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:41.263048 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:41.263123 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:41.296135 2149446 cri.go:96] found id: ""
	I1222 01:50:41.296159 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.296167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:41.296174 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:41.296232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:41.322198 2149446 cri.go:96] found id: ""
	I1222 01:50:41.322277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.322291 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:41.322299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:41.322360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:41.348725 2149446 cri.go:96] found id: ""
	I1222 01:50:41.348750 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.348760 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:41.348766 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:41.348825 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:41.374926 2149446 cri.go:96] found id: ""
	I1222 01:50:41.375005 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.375021 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:41.375029 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:41.375096 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:41.406446 2149446 cri.go:96] found id: ""
	I1222 01:50:41.406537 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.406556 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:41.406566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:41.406648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:41.432212 2149446 cri.go:96] found id: ""
	I1222 01:50:41.432239 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.432247 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:41.432254 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:41.432321 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:41.463730 2149446 cri.go:96] found id: ""
	I1222 01:50:41.463755 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.463765 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:41.463771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:41.463833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:41.490157 2149446 cri.go:96] found id: ""
	I1222 01:50:41.490182 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.490193 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:41.490201 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:41.490213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:41.558943 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:41.558981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:41.576405 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:41.576435 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:41.666063 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:41.666083 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:41.666095 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:41.696072 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:41.696105 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
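Each probe cycle above follows the same sequence: pgrep for a live kube-apiserver process, one crictl query per expected control-plane container, then journal and status collection. The container checks condense to a single loop; a sketch of the equivalent shell (assuming crictl is available on the node, as the Run lines above show):

    # Mirror of the per-component probe: empty output means no container,
    # running or exited, matches the name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done

Every cycle in this run returns empty for all eight names, i.e. the control plane never produced a single container.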
	I1222 01:50:44.226469 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:44.236983 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:44.237055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:44.261867 2149446 cri.go:96] found id: ""
	I1222 01:50:44.261892 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.261900 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:44.261907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:44.261971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:44.287438 2149446 cri.go:96] found id: ""
	I1222 01:50:44.287462 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.287471 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:44.287478 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:44.287536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:44.313755 2149446 cri.go:96] found id: ""
	I1222 01:50:44.313779 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.313788 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:44.313794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:44.313861 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:44.347563 2149446 cri.go:96] found id: ""
	I1222 01:50:44.347589 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.347597 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:44.347604 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:44.347688 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:44.377289 2149446 cri.go:96] found id: ""
	I1222 01:50:44.377313 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.377354 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:44.377362 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:44.377443 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:44.403181 2149446 cri.go:96] found id: ""
	I1222 01:50:44.403207 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.403215 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:44.403222 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:44.403280 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:44.432425 2149446 cri.go:96] found id: ""
	I1222 01:50:44.432449 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.432457 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:44.432465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:44.432529 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:44.458001 2149446 cri.go:96] found id: ""
	I1222 01:50:44.458030 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.458039 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:44.458048 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:44.458089 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.485816 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:44.485844 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:44.552935 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:44.552970 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:44.569824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:44.569852 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:44.652688 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:44.652760 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:44.652785 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:46.992640 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:47.054063 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:47.054189 2149446 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:50:47.057341 2149446 out.go:179] * Enabled addons: 
	I1222 01:50:47.060205 2149446 addons.go:530] duration metric: took 1m59.289939725s for enable addons: enabled=[]
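The addon phase thus ends after just under two minutes with nothing enabled (enabled=[]), and the wait loop below keeps probing the missing control plane. With zero containers reported by crictl, the kubelet and CRI-O journals gathered in each cycle are the only places the root cause can surface; the same collection can be narrowed by hand (sketch; journalctl invocations as in the log, with a filter added for illustration):

    # Pull the most recent errors from the units the report already samples.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
    sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20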
	I1222 01:50:47.182147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:47.192818 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:47.192903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:47.224024 2149446 cri.go:96] found id: ""
	I1222 01:50:47.224046 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.224055 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:47.224061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:47.224122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:47.250330 2149446 cri.go:96] found id: ""
	I1222 01:50:47.250357 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.250365 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:47.250372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:47.250431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:47.275271 2149446 cri.go:96] found id: ""
	I1222 01:50:47.275294 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.275302 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:47.275309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:47.275370 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:47.300369 2149446 cri.go:96] found id: ""
	I1222 01:50:47.300394 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.300403 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:47.300409 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:47.300468 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:47.325147 2149446 cri.go:96] found id: ""
	I1222 01:50:47.325213 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.325234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:47.325253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:47.325362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:47.353926 2149446 cri.go:96] found id: ""
	I1222 01:50:47.353953 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.353961 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:47.353968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:47.354028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:47.378935 2149446 cri.go:96] found id: ""
	I1222 01:50:47.378961 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.378970 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:47.378977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:47.379040 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:47.404019 2149446 cri.go:96] found id: ""
	I1222 01:50:47.404042 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.404051 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:47.404061 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:47.404078 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:47.473731 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:47.473766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:47.490597 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:47.490625 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:47.557658 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:47.550047    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.550549    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552046    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552472    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.553897    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:47.550047    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.550549    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552046    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552472    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.553897    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:47.557681 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:47.557694 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:47.586853 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:47.586888 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:50.125437 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:50.138125 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:50.138195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:50.166675 2149446 cri.go:96] found id: ""
	I1222 01:50:50.166698 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.166706 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:50.166712 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:50.166775 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:50.195252 2149446 cri.go:96] found id: ""
	I1222 01:50:50.195277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.195285 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:50.195293 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:50.195354 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:50.221357 2149446 cri.go:96] found id: ""
	I1222 01:50:50.221390 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.221406 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:50.221413 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:50.221504 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:50.247114 2149446 cri.go:96] found id: ""
	I1222 01:50:50.247137 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.247145 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:50.247163 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:50.247221 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:50.272457 2149446 cri.go:96] found id: ""
	I1222 01:50:50.272482 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.272491 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:50.272497 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:50.272558 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:50.302435 2149446 cri.go:96] found id: ""
	I1222 01:50:50.302464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.302473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:50.302479 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:50.302554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:50.325940 2149446 cri.go:96] found id: ""
	I1222 01:50:50.326010 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.326025 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:50.326033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:50.326095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:50.355131 2149446 cri.go:96] found id: ""
	I1222 01:50:50.355156 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.355167 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:50.355176 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:50.355219 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:50.421016 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:50.421050 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:50.438999 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:50.439026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:50.505148 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:50.497150    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.497999    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499544    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499840    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.501315    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:50.497150    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.497999    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499544    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499840    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.501315    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:50.505167 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:50.505178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:50.533117 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:50.533151 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:53.063902 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:53.074689 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:53.074763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:53.105072 2149446 cri.go:96] found id: ""
	I1222 01:50:53.105098 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.105108 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:53.105114 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:53.105178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:53.135898 2149446 cri.go:96] found id: ""
	I1222 01:50:53.135924 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.135933 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:53.135939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:53.135999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:53.167436 2149446 cri.go:96] found id: ""
	I1222 01:50:53.167461 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.167469 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:53.167476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:53.167559 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:53.194408 2149446 cri.go:96] found id: ""
	I1222 01:50:53.194523 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.194547 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:53.194566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:53.194648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:53.219170 2149446 cri.go:96] found id: ""
	I1222 01:50:53.219232 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.219246 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:53.219259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:53.219319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:53.245034 2149446 cri.go:96] found id: ""
	I1222 01:50:53.245059 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.245067 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:53.245074 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:53.245131 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:53.271853 2149446 cri.go:96] found id: ""
	I1222 01:50:53.271884 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.271893 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:53.271899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:53.271970 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:53.296192 2149446 cri.go:96] found id: ""
	I1222 01:50:53.296225 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.296234 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:53.296243 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:53.296259 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:53.363334 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:53.363372 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:53.381487 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:53.381517 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:53.448549 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:53.440319    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.440874    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.442515    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.443004    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.444491    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:53.440319    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.440874    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.442515    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.443004    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.444491    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:53.448571 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:53.448583 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:53.481685 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:53.481727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:56.012872 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:56.025767 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:56.025845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:56.053844 2149446 cri.go:96] found id: ""
	I1222 01:50:56.053873 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.053882 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:56.053890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:56.053961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:56.082849 2149446 cri.go:96] found id: ""
	I1222 01:50:56.082873 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.082882 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:56.082888 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:56.082964 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:56.116254 2149446 cri.go:96] found id: ""
	I1222 01:50:56.116336 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.116359 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:56.116379 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:56.116489 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:56.157611 2149446 cri.go:96] found id: ""
	I1222 01:50:56.157636 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.157644 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:56.157662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:56.157721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:56.183024 2149446 cri.go:96] found id: ""
	I1222 01:50:56.183102 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.183123 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:56.183137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:56.183214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:56.211789 2149446 cri.go:96] found id: ""
	I1222 01:50:56.211814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.211823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:56.211830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:56.211888 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:56.238212 2149446 cri.go:96] found id: ""
	I1222 01:50:56.238286 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.238309 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:56.238332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:56.238421 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:56.269053 2149446 cri.go:96] found id: ""
	I1222 01:50:56.269079 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.269088 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:56.269097 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:56.269110 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:56.339489 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:56.339527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:56.356488 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:56.356627 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:56.423568 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:56.415753    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.416507    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418088    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418397    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.419880    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:56.415753    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.416507    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418088    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418397    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.419880    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:56.423639 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:56.423664 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:56.452827 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:56.452859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:58.984493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:58.994909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:58.994982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:59.022329 2149446 cri.go:96] found id: ""
	I1222 01:50:59.022354 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.022363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:59.022369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:59.022429 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:59.047878 2149446 cri.go:96] found id: ""
	I1222 01:50:59.047901 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.047909 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:59.047915 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:59.047975 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:59.072596 2149446 cri.go:96] found id: ""
	I1222 01:50:59.072622 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.072631 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:59.072637 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:59.072696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:59.104362 2149446 cri.go:96] found id: ""
	I1222 01:50:59.104391 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.104400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:59.104406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:59.104470 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:59.143935 2149446 cri.go:96] found id: ""
	I1222 01:50:59.143960 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.143970 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:59.143977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:59.144037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:59.172279 2149446 cri.go:96] found id: ""
	I1222 01:50:59.172305 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.172314 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:59.172321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:59.172390 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:59.197781 2149446 cri.go:96] found id: ""
	I1222 01:50:59.197804 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.197813 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:59.197819 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:59.197879 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:59.223417 2149446 cri.go:96] found id: ""
	I1222 01:50:59.223441 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.223450 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:59.223459 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:59.223470 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:59.289154 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:59.289191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:59.306341 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:59.306372 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:59.371286 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:59.363578    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.364246    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.365700    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.366151    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.367566    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:59.371309 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:59.371321 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:59.399863 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:59.399898 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
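The block above is one full pass of the apiserver health check: a pgrep for the kube-apiserver process, then a crictl listing for each expected control-plane container (all empty here), then a round of log gathering. A minimal Go sketch of the listing step follows; it mirrors only the crictl invocation recorded in this log, and the helper name and loop are illustrative, not minikube's actual cri.go implementation. It assumes crictl is installed and sudo is non-interactive.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers, running or exited,
// whose name matches the given component (e.g. "kube-apiserver").
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "--timeout=10s",
		"ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	// crictl prints one container ID per line; empty output means none found.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listCRIContainers(c)
		if err != nil || len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching" lines above.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```

An empty ID list for every component, as in each pass of this log, means the control plane never came up on this node.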
	I1222 01:51:01.927120 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:01.939501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:01.939576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:01.967293 2149446 cri.go:96] found id: ""
	I1222 01:51:01.967326 2149446 logs.go:282] 0 containers: []
	W1222 01:51:01.967336 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:01.967343 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:01.967430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:02.010081 2149446 cri.go:96] found id: ""
	I1222 01:51:02.010124 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.010134 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:02.010141 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:02.010231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:02.037304 2149446 cri.go:96] found id: ""
	I1222 01:51:02.037354 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.037363 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:02.037369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:02.037430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:02.065893 2149446 cri.go:96] found id: ""
	I1222 01:51:02.065917 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.065926 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:02.065932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:02.065992 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:02.098337 2149446 cri.go:96] found id: ""
	I1222 01:51:02.098366 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.098375 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:02.098383 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:02.098448 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:02.137900 2149446 cri.go:96] found id: ""
	I1222 01:51:02.137929 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.137939 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:02.137946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:02.138104 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:02.170824 2149446 cri.go:96] found id: ""
	I1222 01:51:02.170852 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.170861 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:02.170868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:02.170947 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:02.206007 2149446 cri.go:96] found id: ""
	I1222 01:51:02.206032 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.206041 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:02.206050 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:02.206062 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:02.273536 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:02.273574 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:02.290756 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:02.290788 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:02.358784 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:02.350805    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.351393    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.352892    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.353378    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.355003    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:02.358806 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:02.358821 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:02.386732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:02.386772 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
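The passes repeat on a roughly three-second cadence (01:50:59 → 01:51:01 → 01:51:04 in the timestamps above), consistent with a fixed-interval poll for the apiserver process. A minimal sketch of such a poll; the 3-second interval and 4-minute deadline are assumed values for illustration, not minikube's actual constants.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe from the log: pgrep exits 0 only if a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// On every miss, the log above re-lists the CRI containers and gathers
		// kubelet, dmesg, describe-nodes, CRI-O, and container-status output.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```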
	I1222 01:51:04.916635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:04.927272 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:04.927352 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:04.952318 2149446 cri.go:96] found id: ""
	I1222 01:51:04.952350 2149446 logs.go:282] 0 containers: []
	W1222 01:51:04.952358 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:04.952364 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:04.952424 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:04.980287 2149446 cri.go:96] found id: ""
	I1222 01:51:04.980309 2149446 logs.go:282] 0 containers: []
	W1222 01:51:04.980317 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:04.980338 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:04.980398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:05.011897 2149446 cri.go:96] found id: ""
	I1222 01:51:05.011975 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.011998 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:05.012019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:05.012114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:05.039439 2149446 cri.go:96] found id: ""
	I1222 01:51:05.039464 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.039473 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:05.039480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:05.039562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:05.065276 2149446 cri.go:96] found id: ""
	I1222 01:51:05.065309 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.065318 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:05.065352 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:05.065421 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:05.104171 2149446 cri.go:96] found id: ""
	I1222 01:51:05.104250 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.104272 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:05.104293 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:05.104397 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:05.140837 2149446 cri.go:96] found id: ""
	I1222 01:51:05.140872 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.140882 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:05.140889 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:05.140962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:05.174413 2149446 cri.go:96] found id: ""
	I1222 01:51:05.174439 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.174448 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:05.174457 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:05.174469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:05.241273 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:05.232831    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.233309    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.235135    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.235674    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.237437    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:05.241367 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:05.241395 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:05.269917 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:05.269947 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:05.298262 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:05.298289 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:05.366093 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:05.366127 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
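Note that the order of the "Gathering logs for ..." steps changes between passes: kubelet comes first in the earlier passes, while the pass above starts with describe nodes. That shuffling is what ranging over a Go map produces, since map iteration order is randomized per run, so a plausible shape for the gatherer is a map from source name to shell command. The command strings below are copied verbatim from the log; the map-based structure is an inference, not confirmed minikube code.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources { // Go randomizes map iteration order per run
		fmt.Printf("Gathering logs for %s ...\n", name)
		if _, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			// In this report, "describe nodes" is the step that exits with status 1
			// while the apiserver is unreachable.
			fmt.Printf("failed %s: %v\n", name, err)
		}
	}
}
```

Only the describe-nodes source depends on a live apiserver, which is why it is the lone gathering step that fails in every pass.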
	I1222 01:51:07.884446 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:07.897419 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:07.897506 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:07.923443 2149446 cri.go:96] found id: ""
	I1222 01:51:07.923466 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.923474 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:07.923481 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:07.923538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:07.949573 2149446 cri.go:96] found id: ""
	I1222 01:51:07.949595 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.949604 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:07.949610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:07.949672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:07.974625 2149446 cri.go:96] found id: ""
	I1222 01:51:07.974648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.974657 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:07.974664 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:07.974722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:08.000701 2149446 cri.go:96] found id: ""
	I1222 01:51:08.000729 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.000738 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:08.000746 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:08.000823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:08.030073 2149446 cri.go:96] found id: ""
	I1222 01:51:08.030098 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.030106 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:08.030113 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:08.030184 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:08.056629 2149446 cri.go:96] found id: ""
	I1222 01:51:08.056665 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.056675 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:08.056682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:08.056742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:08.086264 2149446 cri.go:96] found id: ""
	I1222 01:51:08.086299 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.086308 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:08.086320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:08.086393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:08.116679 2149446 cri.go:96] found id: ""
	I1222 01:51:08.116709 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.116719 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:08.116728 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:08.116748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:08.157494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:08.157521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:08.233683 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:08.233771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:08.250905 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:08.250934 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:08.316451 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:08.307255    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.308718    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.309622    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311377    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311630    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:08.316472 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:08.316484 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:10.846347 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:10.861517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:10.861699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:10.899351 2149446 cri.go:96] found id: ""
	I1222 01:51:10.899458 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.899504 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:10.899543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:10.899659 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:10.929750 2149446 cri.go:96] found id: ""
	I1222 01:51:10.929818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.929839 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:10.929851 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:10.929929 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:10.960575 2149446 cri.go:96] found id: ""
	I1222 01:51:10.960610 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.960619 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:10.960626 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:10.960698 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:10.985907 2149446 cri.go:96] found id: ""
	I1222 01:51:10.985932 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.985940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:10.985947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:10.986010 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:11.014799 2149446 cri.go:96] found id: ""
	I1222 01:51:11.014878 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.014893 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:11.014901 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:11.014965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:11.040928 2149446 cri.go:96] found id: ""
	I1222 01:51:11.040960 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.040970 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:11.040976 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:11.041078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:11.067246 2149446 cri.go:96] found id: ""
	I1222 01:51:11.067327 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.067351 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:11.067365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:11.067442 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:11.095605 2149446 cri.go:96] found id: ""
	I1222 01:51:11.095631 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.095640 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:11.095650 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:11.095662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:11.169987 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:11.170076 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:11.186770 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:11.186800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:11.248524 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:11.240323    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.241076    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242336    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242961    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.244559    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:11.248546 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:11.248559 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:11.276178 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:11.276215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:13.807549 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:13.818390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:13.818463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:13.844700 2149446 cri.go:96] found id: ""
	I1222 01:51:13.844722 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.844730 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:13.844736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:13.844795 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:13.875626 2149446 cri.go:96] found id: ""
	I1222 01:51:13.875648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.875656 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:13.875662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:13.875727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:13.904646 2149446 cri.go:96] found id: ""
	I1222 01:51:13.904667 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.904676 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:13.904682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:13.904740 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:13.933165 2149446 cri.go:96] found id: ""
	I1222 01:51:13.933241 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.933263 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:13.933282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:13.933402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:13.961468 2149446 cri.go:96] found id: ""
	I1222 01:51:13.961492 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.961501 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:13.961507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:13.961576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:13.985666 2149446 cri.go:96] found id: ""
	I1222 01:51:13.985689 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.985697 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:13.985704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:13.985760 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:14.018377 2149446 cri.go:96] found id: ""
	I1222 01:51:14.018402 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.018411 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:14.018417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:14.018484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:14.044854 2149446 cri.go:96] found id: ""
	I1222 01:51:14.044879 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.044888 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:14.044897 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:14.044933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:14.110126 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:14.110206 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:14.128030 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:14.128108 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:14.196957 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:14.188697    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.189523    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191343    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191689    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.193153    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:14.197029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:14.197056 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:14.226166 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:14.226203 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:16.756369 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:16.767083 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:16.767155 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:16.793678 2149446 cri.go:96] found id: ""
	I1222 01:51:16.793704 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.793712 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:16.793719 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:16.793779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:16.819158 2149446 cri.go:96] found id: ""
	I1222 01:51:16.819183 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.819192 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:16.819198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:16.819260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:16.848081 2149446 cri.go:96] found id: ""
	I1222 01:51:16.848103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.848112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:16.848119 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:16.848178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:16.889132 2149446 cri.go:96] found id: ""
	I1222 01:51:16.889210 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.889233 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:16.889253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:16.889357 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:16.922581 2149446 cri.go:96] found id: ""
	I1222 01:51:16.922656 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.922677 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:16.922697 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:16.922788 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:16.947759 2149446 cri.go:96] found id: ""
	I1222 01:51:16.947833 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.947855 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:16.947876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:16.947959 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:16.973072 2149446 cri.go:96] found id: ""
	I1222 01:51:16.973106 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.973115 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:16.973138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:16.973219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:17.003269 2149446 cri.go:96] found id: ""
	I1222 01:51:17.003303 2149446 logs.go:282] 0 containers: []
	W1222 01:51:17.003328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:17.003339 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:17.003371 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:17.070360 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:17.070398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:17.087501 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:17.087530 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:17.166797 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:17.158419    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.159432    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161029    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161376    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.162807    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:17.166818 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:17.166830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:17.196986 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:17.197018 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
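Every "describe nodes" attempt above fails the same way: kubectl dials https://localhost:8443, the dial resolves to [::1]:8443, and the connection is refused because no kube-apiserver container exists on the node. A one-off Go probe of that port reproduces the condition; this is illustrative only, not part of the test suite.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver listening, this reports the same failure kubectl
		// logs above: dial tcp [::1]:8443: connect: connection refused.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}
```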
	I1222 01:51:19.726818 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:19.737430 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:19.737505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:19.761268 2149446 cri.go:96] found id: ""
	I1222 01:51:19.761291 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.761300 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:19.761306 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:19.761388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:19.787563 2149446 cri.go:96] found id: ""
	I1222 01:51:19.787587 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.787596 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:19.787602 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:19.787665 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:19.816441 2149446 cri.go:96] found id: ""
	I1222 01:51:19.816468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.816477 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:19.816483 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:19.816540 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:19.851780 2149446 cri.go:96] found id: ""
	I1222 01:51:19.851812 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.851821 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:19.851827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:19.851895 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:19.888751 2149446 cri.go:96] found id: ""
	I1222 01:51:19.888785 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.888794 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:19.888800 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:19.888866 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:19.923050 2149446 cri.go:96] found id: ""
	I1222 01:51:19.923078 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.923087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:19.923093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:19.923151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:19.948402 2149446 cri.go:96] found id: ""
	I1222 01:51:19.948435 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.948444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:19.948451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:19.948520 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:19.973868 2149446 cri.go:96] found id: ""
	I1222 01:51:19.973894 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.973903 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:19.973912 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:19.973942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:20.041369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:20.032095    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.032873    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.034426    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.035065    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.036800    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:20.041391 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:20.041404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:20.070678 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:20.070728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:20.112858 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:20.112886 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:20.189261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:20.189300 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:22.708863 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:22.719695 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:22.719790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:22.746691 2149446 cri.go:96] found id: ""
	I1222 01:51:22.746715 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.746724 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:22.746731 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:22.746790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:22.775484 2149446 cri.go:96] found id: ""
	I1222 01:51:22.775511 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.775521 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:22.775527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:22.775592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:22.801739 2149446 cri.go:96] found id: ""
	I1222 01:51:22.801771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.801781 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:22.801787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:22.801855 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:22.826408 2149446 cri.go:96] found id: ""
	I1222 01:51:22.826431 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.826439 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:22.826446 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:22.826510 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:22.857976 2149446 cri.go:96] found id: ""
	I1222 01:51:22.858000 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.858009 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:22.858016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:22.858076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:22.895329 2149446 cri.go:96] found id: ""
	I1222 01:51:22.895357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.895365 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:22.895372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:22.895431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:22.924900 2149446 cri.go:96] found id: ""
	I1222 01:51:22.924923 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.924932 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:22.924938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:22.924997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:22.949093 2149446 cri.go:96] found id: ""
	I1222 01:51:22.949166 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.949177 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
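Each probe cycle walks the same component list and finds no containers in any state. A hedged sketch of the equivalent sweep written as a single loop (component names copied from this log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  # --quiet prints only container IDs; empty output means no match
	  sudo crictl --timeout=10s ps -a --quiet --name="$c"
	done
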
	I1222 01:51:22.949214 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:22.949228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:22.978825 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:22.978862 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:23.008996 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:23.009026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:23.075469 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:23.075506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:23.092176 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:23.092253 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:23.168462 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:23.160726    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.161296    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.162792    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.163269    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.164704    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
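Every kubectl attempt in this section fails the same way: the client dials [::1]:8443 and the TCP connection is refused, meaning no apiserver is listening there. A minimal check from the node, assuming the default control-plane port 8443 shown in this log:

	# nothing bound to :8443 -> empty output
	sudo ss -ltnp 'sport = :8443'
	# expect the same "connection refused" as the errors above
	curl -k --max-time 5 https://localhost:8443/healthz
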
	I1222 01:51:25.669110 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:25.679471 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:25.679544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:25.707773 2149446 cri.go:96] found id: ""
	I1222 01:51:25.707798 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.707807 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:25.707814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:25.707872 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:25.734428 2149446 cri.go:96] found id: ""
	I1222 01:51:25.734452 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.734460 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:25.734480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:25.734536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:25.764253 2149446 cri.go:96] found id: ""
	I1222 01:51:25.764349 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.764381 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:25.764416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:25.764515 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:25.791966 2149446 cri.go:96] found id: ""
	I1222 01:51:25.792042 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.792076 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:25.792100 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:25.792193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:25.820057 2149446 cri.go:96] found id: ""
	I1222 01:51:25.820132 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.820167 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:25.820191 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:25.820279 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:25.854439 2149446 cri.go:96] found id: ""
	I1222 01:51:25.854514 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.854537 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:25.854556 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:25.854661 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:25.887808 2149446 cri.go:96] found id: ""
	I1222 01:51:25.887884 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.887905 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:25.887919 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:25.887996 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:25.922356 2149446 cri.go:96] found id: ""
	I1222 01:51:25.922380 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.922389 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:25.922398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:25.922443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:25.987720 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:25.987760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:26.005813 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:26.005849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:26.075026 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:26.066017    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.066714    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.068425    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.069221    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.070982    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:26.075049 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:26.075061 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:26.105526 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:26.105564 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.644504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:28.655061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:28.655129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:28.679454 2149446 cri.go:96] found id: ""
	I1222 01:51:28.679477 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.679486 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:28.679492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:28.679551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:28.708993 2149446 cri.go:96] found id: ""
	I1222 01:51:28.709018 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.709027 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:28.709033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:28.709095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:28.733612 2149446 cri.go:96] found id: ""
	I1222 01:51:28.733637 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.733646 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:28.733652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:28.733711 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:28.762256 2149446 cri.go:96] found id: ""
	I1222 01:51:28.762285 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.762294 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:28.762300 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:28.762359 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:28.788169 2149446 cri.go:96] found id: ""
	I1222 01:51:28.788193 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.788201 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:28.788207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:28.788266 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:28.816745 2149446 cri.go:96] found id: ""
	I1222 01:51:28.816771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.816780 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:28.816787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:28.816852 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:28.845364 2149446 cri.go:96] found id: ""
	I1222 01:51:28.845392 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.845401 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:28.845407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:28.845469 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:28.881787 2149446 cri.go:96] found id: ""
	I1222 01:51:28.881818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.881827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:28.881836 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:28.881855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:28.914780 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:28.914845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.954392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:28.954419 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:29.021327 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:29.021365 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:29.037773 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:29.037801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:29.100009 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:29.091767    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.092439    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.093858    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.094561    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.096086    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:31.600320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:31.612534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:31.612605 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:31.639353 2149446 cri.go:96] found id: ""
	I1222 01:51:31.639376 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.639384 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:31.639391 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:31.639449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:31.664903 2149446 cri.go:96] found id: ""
	I1222 01:51:31.664927 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.664936 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:31.664942 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:31.665003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:31.694601 2149446 cri.go:96] found id: ""
	I1222 01:51:31.694626 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.694635 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:31.694642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:31.694700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:31.719433 2149446 cri.go:96] found id: ""
	I1222 01:51:31.719476 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.719485 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:31.719493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:31.719573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:31.744350 2149446 cri.go:96] found id: ""
	I1222 01:51:31.744379 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.744388 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:31.744395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:31.744457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:31.768144 2149446 cri.go:96] found id: ""
	I1222 01:51:31.768171 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.768180 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:31.768187 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:31.768246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:31.793170 2149446 cri.go:96] found id: ""
	I1222 01:51:31.793195 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.793204 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:31.793211 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:31.793269 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:31.817469 2149446 cri.go:96] found id: ""
	I1222 01:51:31.817495 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.817510 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:31.817520 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:31.817531 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:31.833979 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:31.834009 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:31.921313 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:31.921411 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:31.921424 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:31.950732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:31.950766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:31.979280 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:31.979390 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
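Between 01:51:20 and 01:51:46 the same probe repeats roughly every three seconds: pgrep for the apiserver process, then the crictl sweep, then log gathering. A sketch of an equivalent manual wait loop (the pgrep pattern is copied from this log; the 3-second interval is an assumption matching the observed cadence):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  echo "kube-apiserver not up yet at $(date +%T)"
	  sleep 3   # assumed interval, matching the timestamps above
	done
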
	I1222 01:51:34.551921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:34.562702 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:34.562773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:34.587409 2149446 cri.go:96] found id: ""
	I1222 01:51:34.587437 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.587446 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:34.587453 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:34.587512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:34.615168 2149446 cri.go:96] found id: ""
	I1222 01:51:34.615194 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.615203 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:34.615210 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:34.615270 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:34.640950 2149446 cri.go:96] found id: ""
	I1222 01:51:34.640975 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.640983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:34.640990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:34.641070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:34.666882 2149446 cri.go:96] found id: ""
	I1222 01:51:34.666908 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.666917 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:34.666924 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:34.667003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:34.691815 2149446 cri.go:96] found id: ""
	I1222 01:51:34.691841 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.691850 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:34.691856 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:34.691935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:34.716718 2149446 cri.go:96] found id: ""
	I1222 01:51:34.716744 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.716753 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:34.716760 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:34.716819 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:34.741533 2149446 cri.go:96] found id: ""
	I1222 01:51:34.741557 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.741565 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:34.741572 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:34.741638 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:34.766353 2149446 cri.go:96] found id: ""
	I1222 01:51:34.766378 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.766387 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:34.766398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:34.766410 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:34.831354 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:34.831392 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:34.849488 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:34.849517 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:34.936942 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:34.926834    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.927637    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929290    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929888    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.931562    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:34.936964 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:34.936976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:34.966069 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:34.966106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:37.498453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:37.509387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:37.509460 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:37.536333 2149446 cri.go:96] found id: ""
	I1222 01:51:37.536418 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.536440 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:37.536458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:37.536564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:37.561738 2149446 cri.go:96] found id: ""
	I1222 01:51:37.561761 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.561770 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:37.561777 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:37.561835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:37.589068 2149446 cri.go:96] found id: ""
	I1222 01:51:37.589103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.589112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:37.589118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:37.589187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:37.613720 2149446 cri.go:96] found id: ""
	I1222 01:51:37.613745 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.613753 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:37.613760 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:37.613828 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:37.638911 2149446 cri.go:96] found id: ""
	I1222 01:51:37.638936 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.638944 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:37.638950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:37.639030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:37.664184 2149446 cri.go:96] found id: ""
	I1222 01:51:37.664209 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.664218 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:37.664224 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:37.664336 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:37.689816 2149446 cri.go:96] found id: ""
	I1222 01:51:37.689840 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.689849 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:37.689855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:37.689915 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:37.714864 2149446 cri.go:96] found id: ""
	I1222 01:51:37.714888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.714896 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:37.714905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:37.714919 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:37.731696 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:37.731722 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:37.802911 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:37.789256    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.794884    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.795605    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.797500    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.798024    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:37.802930 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:37.802944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:37.831970 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:37.832005 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:37.864010 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:37.864044 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:40.445669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:40.458164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:40.458231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:40.484132 2149446 cri.go:96] found id: ""
	I1222 01:51:40.484159 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.484167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:40.484173 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:40.484231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:40.510161 2149446 cri.go:96] found id: ""
	I1222 01:51:40.510187 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.510195 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:40.510201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:40.510263 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:40.537903 2149446 cri.go:96] found id: ""
	I1222 01:51:40.537929 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.537938 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:40.537944 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:40.538008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:40.569673 2149446 cri.go:96] found id: ""
	I1222 01:51:40.569699 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.569707 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:40.569714 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:40.569773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:40.594097 2149446 cri.go:96] found id: ""
	I1222 01:51:40.594122 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.594130 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:40.594137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:40.594199 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:40.619442 2149446 cri.go:96] found id: ""
	I1222 01:51:40.619468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.619477 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:40.619486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:40.619545 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:40.644458 2149446 cri.go:96] found id: ""
	I1222 01:51:40.644485 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.644494 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:40.644500 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:40.644559 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:40.669598 2149446 cri.go:96] found id: ""
	I1222 01:51:40.669625 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.669634 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:40.669642 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:40.669655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:40.686418 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:40.686488 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:40.747161 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:40.739409    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.739815    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.740969    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.741303    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.742747    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:40.747183 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:40.747195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:40.774671 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:40.774703 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:40.806536 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:40.806568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:43.371926 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:43.382998 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:43.383098 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:43.418960 2149446 cri.go:96] found id: ""
	I1222 01:51:43.419037 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.419060 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:43.419081 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:43.419145 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:43.447476 2149446 cri.go:96] found id: ""
	I1222 01:51:43.447502 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.447511 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:43.447517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:43.447586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:43.474323 2149446 cri.go:96] found id: ""
	I1222 01:51:43.474357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.474367 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:43.474374 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:43.474448 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:43.500245 2149446 cri.go:96] found id: ""
	I1222 01:51:43.500323 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.500371 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:43.500390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:43.500484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:43.526338 2149446 cri.go:96] found id: ""
	I1222 01:51:43.526361 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.526369 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:43.526375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:43.526434 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:43.551187 2149446 cri.go:96] found id: ""
	I1222 01:51:43.551211 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.551220 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:43.551227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:43.551288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:43.576838 2149446 cri.go:96] found id: ""
	I1222 01:51:43.576861 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.576870 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:43.576877 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:43.576939 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:43.603211 2149446 cri.go:96] found id: ""
	I1222 01:51:43.603283 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.603303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:43.603326 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:43.603363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:43.671168 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:43.671202 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:43.688181 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:43.688252 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:43.752146 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:43.744254    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.744955    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746431    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746780    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.748199    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:43.752167 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:43.752178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:43.780617 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:43.780654 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:46.308524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:46.318956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:46.319027 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:46.346481 2149446 cri.go:96] found id: ""
	I1222 01:51:46.346508 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.346517 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:46.346523 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:46.346582 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:46.375577 2149446 cri.go:96] found id: ""
	I1222 01:51:46.375603 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.375612 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:46.375618 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:46.375681 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:46.410693 2149446 cri.go:96] found id: ""
	I1222 01:51:46.410721 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.410730 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:46.410736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:46.410805 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:46.437367 2149446 cri.go:96] found id: ""
	I1222 01:51:46.437391 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.437400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:46.437407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:46.437466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:46.463337 2149446 cri.go:96] found id: ""
	I1222 01:51:46.463364 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.463373 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:46.463379 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:46.463436 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:46.487509 2149446 cri.go:96] found id: ""
	I1222 01:51:46.487532 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.487540 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:46.487547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:46.487609 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:46.513965 2149446 cri.go:96] found id: ""
	I1222 01:51:46.513989 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.513998 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:46.514037 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:46.514111 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:46.539974 2149446 cri.go:96] found id: ""
	I1222 01:51:46.539999 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.540008 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:46.540017 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:46.540032 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:46.604304 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:46.604345 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:46.620864 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:46.620894 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:46.682614 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:46.674017    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.674824    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676316    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676738    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.678200    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:46.682639 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:46.682652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:46.711167 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:46.711201 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:49.246342 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:49.256853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:49.256925 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:49.285523 2149446 cri.go:96] found id: ""
	I1222 01:51:49.285548 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.285557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:49.285563 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:49.285621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:49.314893 2149446 cri.go:96] found id: ""
	I1222 01:51:49.314915 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.314923 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:49.314930 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:49.314988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:49.346252 2149446 cri.go:96] found id: ""
	I1222 01:51:49.346281 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.346290 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:49.346297 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:49.346353 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:49.380477 2149446 cri.go:96] found id: ""
	I1222 01:51:49.380505 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.380514 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:49.380521 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:49.380590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:49.414563 2149446 cri.go:96] found id: ""
	I1222 01:51:49.414586 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.414595 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:49.414601 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:49.414660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:49.439047 2149446 cri.go:96] found id: ""
	I1222 01:51:49.439072 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.439081 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:49.439088 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:49.439171 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:49.465565 2149446 cri.go:96] found id: ""
	I1222 01:51:49.465592 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.465601 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:49.465607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:49.465685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:49.489816 2149446 cri.go:96] found id: ""
	I1222 01:51:49.489838 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.489847 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:49.489856 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:49.489871 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:49.555234 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:49.555269 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:49.573085 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:49.573124 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:49.640369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:49.632082    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.632738    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634246    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634633    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.636132    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:49.640390 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:49.640402 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:49.668820 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:49.668855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:52.197481 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:52.208164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:52.208283 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:52.240218 2149446 cri.go:96] found id: ""
	I1222 01:51:52.240244 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.240252 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:52.240258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:52.240325 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:52.267476 2149446 cri.go:96] found id: ""
	I1222 01:51:52.267503 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.267512 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:52.267518 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:52.267588 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:52.292286 2149446 cri.go:96] found id: ""
	I1222 01:51:52.292310 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.292319 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:52.292332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:52.292395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:52.320140 2149446 cri.go:96] found id: ""
	I1222 01:51:52.320164 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.320173 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:52.320179 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:52.320256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:52.353854 2149446 cri.go:96] found id: ""
	I1222 01:51:52.353888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.353897 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:52.353903 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:52.354001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:52.383818 2149446 cri.go:96] found id: ""
	I1222 01:51:52.383851 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.383860 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:52.383867 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:52.383958 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:52.418591 2149446 cri.go:96] found id: ""
	I1222 01:51:52.418616 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.418624 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:52.418631 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:52.418714 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:52.449289 2149446 cri.go:96] found id: ""
	I1222 01:51:52.449383 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.449407 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:52.449430 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:52.449461 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:52.525065 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:52.525107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:52.543910 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:52.543942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:52.607708 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:52.599699    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.600218    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.601875    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.602334    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.603935    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:52.607772 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:52.607790 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:52.636841 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:52.636879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:55.167236 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:55.178565 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:55.178664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:55.207461 2149446 cri.go:96] found id: ""
	I1222 01:51:55.207486 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.207501 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:55.207507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:55.207570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:55.232542 2149446 cri.go:96] found id: ""
	I1222 01:51:55.232615 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.232638 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:55.232657 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:55.232741 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:55.258972 2149446 cri.go:96] found id: ""
	I1222 01:51:55.258999 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.259008 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:55.259016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:55.259078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:55.285371 2149446 cri.go:96] found id: ""
	I1222 01:51:55.285397 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.285406 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:55.285413 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:55.285484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:55.313304 2149446 cri.go:96] found id: ""
	I1222 01:51:55.313354 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.313363 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:55.313370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:55.313439 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:55.347910 2149446 cri.go:96] found id: ""
	I1222 01:51:55.347940 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.347948 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:55.347955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:55.348028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:55.381847 2149446 cri.go:96] found id: ""
	I1222 01:51:55.381869 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.381877 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:55.381884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:55.381948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:55.418137 2149446 cri.go:96] found id: ""
	I1222 01:51:55.418161 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.418169 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:55.418179 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:55.418192 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:55.484714 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:55.484752 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:55.501483 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:55.501575 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:55.578456 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:55.569912    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.570649    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.572190    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.572838    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.574290    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:55.578476 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:55.578487 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:55.606594 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:55.606627 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:58.138273 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:58.149164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:58.149233 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:58.176492 2149446 cri.go:96] found id: ""
	I1222 01:51:58.176517 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.176525 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:58.176532 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:58.176592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:58.200700 2149446 cri.go:96] found id: ""
	I1222 01:51:58.200724 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.200733 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:58.200740 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:58.200799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:58.227476 2149446 cri.go:96] found id: ""
	I1222 01:51:58.227501 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.227511 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:58.227517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:58.227582 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:58.252693 2149446 cri.go:96] found id: ""
	I1222 01:51:58.252720 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.252728 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:58.252736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:58.252801 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:58.277812 2149446 cri.go:96] found id: ""
	I1222 01:51:58.277835 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.277843 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:58.277850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:58.277910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:58.303365 2149446 cri.go:96] found id: ""
	I1222 01:51:58.303400 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.303409 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:58.303432 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:58.303509 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:58.328480 2149446 cri.go:96] found id: ""
	I1222 01:51:58.328504 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.328513 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:58.328520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:58.328578 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:58.359399 2149446 cri.go:96] found id: ""
	I1222 01:51:58.359426 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.359436 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:58.359445 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:58.359456 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:58.391493 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:58.391529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:58.429463 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:58.429487 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:58.494632 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:58.494667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:58.511259 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:58.511294 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:58.572875 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:58.564819    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.565262    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566536    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566986    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.568425    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:01.073148 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:01.084069 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:01.084185 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:01.116512 2149446 cri.go:96] found id: ""
	I1222 01:52:01.116590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.116626 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:01.116635 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:01.116729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:01.155908 2149446 cri.go:96] found id: ""
	I1222 01:52:01.155940 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.155948 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:01.155956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:01.156039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:01.190906 2149446 cri.go:96] found id: ""
	I1222 01:52:01.190992 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.191015 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:01.191036 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:01.191159 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:01.228209 2149446 cri.go:96] found id: ""
	I1222 01:52:01.228284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.228300 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:01.228311 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:01.228390 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:01.258097 2149446 cri.go:96] found id: ""
	I1222 01:52:01.258122 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.258131 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:01.258138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:01.258257 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:01.288999 2149446 cri.go:96] found id: ""
	I1222 01:52:01.289026 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.289035 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:01.289042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:01.289107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:01.318779 2149446 cri.go:96] found id: ""
	I1222 01:52:01.318806 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.318817 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:01.318825 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:01.318896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:01.354526 2149446 cri.go:96] found id: ""
	I1222 01:52:01.354551 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.354560 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:01.354568 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:01.354580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:01.377079 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:01.377178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:01.467471 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:01.456482    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.457416    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.459443    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.460466    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.462034    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:01.467498 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:01.467512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:01.497872 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:01.497924 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:01.538392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:01.538422 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:04.105458 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:04.117694 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:04.117772 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:04.145575 2149446 cri.go:96] found id: ""
	I1222 01:52:04.145610 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.145618 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:04.145624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:04.145682 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:04.175381 2149446 cri.go:96] found id: ""
	I1222 01:52:04.175407 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.175416 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:04.175422 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:04.175501 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:04.201486 2149446 cri.go:96] found id: ""
	I1222 01:52:04.201511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.201520 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:04.201526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:04.201627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:04.226750 2149446 cri.go:96] found id: ""
	I1222 01:52:04.226776 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.226785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:04.226791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:04.226868 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:04.254116 2149446 cri.go:96] found id: ""
	I1222 01:52:04.254180 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.254194 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:04.254201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:04.254260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:04.279876 2149446 cri.go:96] found id: ""
	I1222 01:52:04.279900 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.279909 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:04.279916 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:04.279979 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:04.305403 2149446 cri.go:96] found id: ""
	I1222 01:52:04.305425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.305433 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:04.305439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:04.305500 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:04.334607 2149446 cri.go:96] found id: ""
	I1222 01:52:04.334633 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.334641 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:04.334650 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:04.334682 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:04.373276 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:04.373306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:04.446848 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:04.446882 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:04.464981 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:04.465008 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:04.530912 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:04.522524    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.523282    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.524828    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.525152    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.526604    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:04.530931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:04.530943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:07.063151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:07.073864 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:07.073937 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:07.099937 2149446 cri.go:96] found id: ""
	I1222 01:52:07.099966 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.099975 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:07.099981 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:07.100043 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:07.127907 2149446 cri.go:96] found id: ""
	I1222 01:52:07.127934 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.127943 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:07.127949 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:07.128009 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:07.162965 2149446 cri.go:96] found id: ""
	I1222 01:52:07.162991 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.162999 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:07.163006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:07.163067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:07.192890 2149446 cri.go:96] found id: ""
	I1222 01:52:07.192916 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.192925 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:07.192932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:07.192994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:07.221463 2149446 cri.go:96] found id: ""
	I1222 01:52:07.221495 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.221504 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:07.221512 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:07.221572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:07.247489 2149446 cri.go:96] found id: ""
	I1222 01:52:07.247517 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.247527 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:07.247535 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:07.247600 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:07.272728 2149446 cri.go:96] found id: ""
	I1222 01:52:07.272755 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.272764 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:07.272771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:07.272849 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:07.297364 2149446 cri.go:96] found id: ""
	I1222 01:52:07.297389 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.297398 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:07.297407 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:07.297418 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:07.325214 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:07.325251 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:07.360148 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:07.360177 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:07.432261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:07.432295 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:07.450157 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:07.450186 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:07.516409 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:07.507778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.508648    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510237    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.512438    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
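The five memcache.go lines are kubectl retrying its client-side API discovery calls; connection refused on [::1]:8443 means nothing is listening on the apiserver port at all, which is consistent with the empty kube-apiserver listings above. The probe can be re-run by hand from inside the node, plus one extra check (the ss line is a suggested addition, not part of the log):

    # Same command the log runs; fails while no apiserver is up.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
    # Quick confirmation that nothing is bound to the port:
    sudo ss -ltn 'sport = :8443'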
	I1222 01:52:10.017883 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:10.030135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:10.030223 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:10.060521 2149446 cri.go:96] found id: ""
	I1222 01:52:10.060547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.060556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:10.060563 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:10.060627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:10.088074 2149446 cri.go:96] found id: ""
	I1222 01:52:10.088112 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.088124 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:10.088133 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:10.088200 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:10.130279 2149446 cri.go:96] found id: ""
	I1222 01:52:10.130306 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.130315 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:10.130321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:10.130387 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:10.163042 2149446 cri.go:96] found id: ""
	I1222 01:52:10.163067 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.163078 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:10.163085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:10.163147 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:10.190658 2149446 cri.go:96] found id: ""
	I1222 01:52:10.190684 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.190693 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:10.190700 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:10.190763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:10.218153 2149446 cri.go:96] found id: ""
	I1222 01:52:10.218181 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.218189 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:10.218197 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:10.218261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:10.244623 2149446 cri.go:96] found id: ""
	I1222 01:52:10.244647 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.244656 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:10.244662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:10.244755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:10.270908 2149446 cri.go:96] found id: ""
	I1222 01:52:10.270931 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.270939 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:10.270948 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:10.270959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:10.299810 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:10.299846 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:10.331687 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:10.331716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:10.407109 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:10.407148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:10.425255 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:10.425286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:10.506189 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:10.497058    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.497925    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.499600    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.500185    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.501961    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:13.007836 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:13.019282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:13.019388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:13.045528 2149446 cri.go:96] found id: ""
	I1222 01:52:13.045553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.045562 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:13.045568 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:13.045631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:13.071460 2149446 cri.go:96] found id: ""
	I1222 01:52:13.071528 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.071542 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:13.071549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:13.071623 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:13.103717 2149446 cri.go:96] found id: ""
	I1222 01:52:13.103749 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.103758 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:13.103765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:13.103835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:13.133730 2149446 cri.go:96] found id: ""
	I1222 01:52:13.133757 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.133772 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:13.133787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:13.133846 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:13.170719 2149446 cri.go:96] found id: ""
	I1222 01:52:13.170799 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.170841 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:13.170860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:13.170948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:13.197725 2149446 cri.go:96] found id: ""
	I1222 01:52:13.197801 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.197823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:13.197842 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:13.197926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:13.224209 2149446 cri.go:96] found id: ""
	I1222 01:52:13.224404 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.224492 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:13.224517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:13.224639 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:13.249315 2149446 cri.go:96] found id: ""
	I1222 01:52:13.249353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.249362 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:13.249371 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:13.249382 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:13.280377 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:13.280404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:13.346072 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:13.346106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:13.364038 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:13.364070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:13.433131 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:13.424970    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.425643    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427164    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427697    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.429162    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:13.433163 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:13.433175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:15.961525 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:15.972093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:15.972167 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:15.996879 2149446 cri.go:96] found id: ""
	I1222 01:52:15.996913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:15.996922 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:15.996929 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:15.996994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:16.024677 2149446 cri.go:96] found id: ""
	I1222 01:52:16.024700 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.024709 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:16.024715 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:16.024777 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:16.050558 2149446 cri.go:96] found id: ""
	I1222 01:52:16.050590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.050599 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:16.050606 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:16.050666 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:16.076374 2149446 cri.go:96] found id: ""
	I1222 01:52:16.076400 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.076409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:16.076416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:16.076476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:16.107298 2149446 cri.go:96] found id: ""
	I1222 01:52:16.107323 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.107331 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:16.107339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:16.107402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:16.139868 2149446 cri.go:96] found id: ""
	I1222 01:52:16.139895 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.139904 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:16.139911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:16.139972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:16.170608 2149446 cri.go:96] found id: ""
	I1222 01:52:16.170631 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.170640 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:16.170646 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:16.170708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:16.195886 2149446 cri.go:96] found id: ""
	I1222 01:52:16.195913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.195932 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:16.195941 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:16.195954 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:16.262211 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:16.262249 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:16.280423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:16.280452 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:16.356185 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:16.346872    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.347685    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.349384    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.350021    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.351608    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:16.356258 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:16.356286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:16.391461 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:16.391538 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
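The cycle boundaries above (01:52:07, :10, :13, :16, ...) show the wait loop polling on a roughly three-second interval: each round first checks for a live apiserver process and falls through to the full diagnostic sweep when none is found. The pgrep flags match against the full command line (-f), require an exact regex match (-x), and report only the newest match (-n). The polling pattern, as an illustrative sketch with the interval inferred from the timestamps:

    # Poll until an apiserver whose command line matches appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # cadence inferred from the log timestamps; illustrative only
    done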
	I1222 01:52:18.932633 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:18.944359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:18.944431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:18.971356 2149446 cri.go:96] found id: ""
	I1222 01:52:18.971379 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.971388 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:18.971394 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:18.971457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:18.998044 2149446 cri.go:96] found id: ""
	I1222 01:52:18.998070 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.998078 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:18.998085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:18.998149 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:19.028529 2149446 cri.go:96] found id: ""
	I1222 01:52:19.028553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.028566 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:19.028573 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:19.028633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:19.052808 2149446 cri.go:96] found id: ""
	I1222 01:52:19.052833 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.052843 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:19.052850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:19.052910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:19.077981 2149446 cri.go:96] found id: ""
	I1222 01:52:19.078004 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.078012 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:19.078019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:19.078080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:19.105393 2149446 cri.go:96] found id: ""
	I1222 01:52:19.105422 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.105433 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:19.105439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:19.105498 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:19.146968 2149446 cri.go:96] found id: ""
	I1222 01:52:19.146995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.147004 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:19.147010 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:19.147075 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:19.176941 2149446 cri.go:96] found id: ""
	I1222 01:52:19.176965 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.176974 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:19.176983 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:19.176995 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:19.246864 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:19.246904 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:19.266042 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:19.266070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:19.330582 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:19.322391    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.323105    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.324610    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.325067    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.326613    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:19.330602 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:19.330615 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:19.366422 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:19.366457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:21.896072 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:21.906705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:21.906774 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:21.931888 2149446 cri.go:96] found id: ""
	I1222 01:52:21.931913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.931921 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:21.931928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:21.931988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:21.959852 2149446 cri.go:96] found id: ""
	I1222 01:52:21.959925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.959959 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:21.959984 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:21.960070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:21.984377 2149446 cri.go:96] found id: ""
	I1222 01:52:21.984402 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.984410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:21.984423 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:21.984485 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:22.015185 2149446 cri.go:96] found id: ""
	I1222 01:52:22.015212 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.015221 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:22.015227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:22.015290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:22.041402 2149446 cri.go:96] found id: ""
	I1222 01:52:22.041427 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.041436 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:22.041442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:22.041505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:22.069753 2149446 cri.go:96] found id: ""
	I1222 01:52:22.069780 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.069790 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:22.069797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:22.069863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:22.100905 2149446 cri.go:96] found id: ""
	I1222 01:52:22.100941 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.100950 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:22.100956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:22.101028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:22.133734 2149446 cri.go:96] found id: ""
	I1222 01:52:22.133770 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.133778 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:22.133787 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:22.133799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:22.219653 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:22.219690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:22.236668 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:22.236699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:22.306717 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:22.298969    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.299431    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301029    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301450    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.302886    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:22.306779 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:22.306799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:22.334960 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:22.334996 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:24.888617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:24.899592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:24.899667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:24.924136 2149446 cri.go:96] found id: ""
	I1222 01:52:24.924161 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.924171 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:24.924177 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:24.924237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:24.949757 2149446 cri.go:96] found id: ""
	I1222 01:52:24.949782 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.949791 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:24.949797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:24.949862 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:24.975441 2149446 cri.go:96] found id: ""
	I1222 01:52:24.975468 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.975476 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:24.975482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:24.975544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:24.999474 2149446 cri.go:96] found id: ""
	I1222 01:52:24.999500 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.999509 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:24.999517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:24.999624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:25.034630 2149446 cri.go:96] found id: ""
	I1222 01:52:25.034655 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.034663 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:25.034670 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:25.034737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:25.060352 2149446 cri.go:96] found id: ""
	I1222 01:52:25.060375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.060385 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:25.060392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:25.060516 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:25.088743 2149446 cri.go:96] found id: ""
	I1222 01:52:25.088768 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.088777 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:25.088784 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:25.088845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:25.116748 2149446 cri.go:96] found id: ""
	I1222 01:52:25.116781 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.116790 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:25.116800 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:25.116813 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:25.198386 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:25.198430 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:25.216584 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:25.216619 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:25.285946 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:25.278237    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.278629    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280115    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280453    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.281691    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:25.285971 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:25.285986 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:25.313622 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:25.313652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:27.852014 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:27.862934 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:27.863049 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:27.888699 2149446 cri.go:96] found id: ""
	I1222 01:52:27.888735 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.888744 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:27.888751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:27.888822 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:27.914148 2149446 cri.go:96] found id: ""
	I1222 01:52:27.914223 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.914246 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:27.914259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:27.914337 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:27.939481 2149446 cri.go:96] found id: ""
	I1222 01:52:27.939560 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.939581 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:27.939595 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:27.939668 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:27.967633 2149446 cri.go:96] found id: ""
	I1222 01:52:27.967662 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.967671 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:27.967678 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:27.967785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:27.991808 2149446 cri.go:96] found id: ""
	I1222 01:52:27.991889 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.991911 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:27.991932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:27.992037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:28.023583 2149446 cri.go:96] found id: ""
	I1222 01:52:28.023622 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.023631 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:28.023638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:28.023709 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:28.052545 2149446 cri.go:96] found id: ""
	I1222 01:52:28.052625 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.052648 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:28.052672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:28.052763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:28.078522 2149446 cri.go:96] found id: ""
	I1222 01:52:28.078550 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.078558 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:28.078567 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:28.078580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:28.147757 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:28.147792 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:28.165530 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:28.165560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:28.232073 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:28.223681    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.224341    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.225912    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.226397    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.228007    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:28.232094 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:28.232107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:28.260533 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:28.260569 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:30.796554 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:30.807742 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:30.807813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:30.835334 2149446 cri.go:96] found id: ""
	I1222 01:52:30.835361 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.835370 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:30.835377 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:30.835467 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:30.861061 2149446 cri.go:96] found id: ""
	I1222 01:52:30.861101 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.861113 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:30.861121 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:30.861195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:30.888925 2149446 cri.go:96] found id: ""
	I1222 01:52:30.888999 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.889021 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:30.889042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:30.889128 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:30.916374 2149446 cri.go:96] found id: ""
	I1222 01:52:30.916448 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.916470 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:30.916490 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:30.916583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:30.941908 2149446 cri.go:96] found id: ""
	I1222 01:52:30.941932 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.941940 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:30.941947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:30.942017 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:30.968265 2149446 cri.go:96] found id: ""
	I1222 01:52:30.968290 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.968298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:30.968305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:30.968372 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:30.995148 2149446 cri.go:96] found id: ""
	I1222 01:52:30.995174 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.995183 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:30.995189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:30.995250 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:31.022834 2149446 cri.go:96] found id: ""
	I1222 01:52:31.022861 2149446 logs.go:282] 0 containers: []
	W1222 01:52:31.022870 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:31.022879 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:31.022891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:31.083576 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:31.075792    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.076576    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.077772    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.078220    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.079697    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:31.083600 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:31.083613 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:31.120484 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:31.120518 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:31.156273 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:31.156303 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:31.226587 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:31.226623 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:33.743333 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:33.753911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:33.753986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:33.779167 2149446 cri.go:96] found id: ""
	I1222 01:52:33.779192 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.779201 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:33.779207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:33.779265 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:33.804863 2149446 cri.go:96] found id: ""
	I1222 01:52:33.804892 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.804901 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:33.804913 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:33.804972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:33.830391 2149446 cri.go:96] found id: ""
	I1222 01:52:33.830414 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.830421 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:33.830428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:33.830488 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:33.856024 2149446 cri.go:96] found id: ""
	I1222 01:52:33.856047 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.856057 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:33.856064 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:33.856121 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:33.885352 2149446 cri.go:96] found id: ""
	I1222 01:52:33.885375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.885383 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:33.885389 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:33.885454 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:33.910795 2149446 cri.go:96] found id: ""
	I1222 01:52:33.910819 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.910827 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:33.910835 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:33.910922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:33.937408 2149446 cri.go:96] found id: ""
	I1222 01:52:33.937435 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.937444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:33.937451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:33.937513 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:33.964011 2149446 cri.go:96] found id: ""
	I1222 01:52:33.964037 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.964046 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:33.964055 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:33.964069 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:34.030109 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:34.021515    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.022250    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.023865    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.024450    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.026222    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:34.030135 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:34.030147 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:34.058869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:34.058905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:34.090878 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:34.090905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:34.173693 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:34.173734 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:36.691703 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:36.703200 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:36.703287 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:36.729524 2149446 cri.go:96] found id: ""
	I1222 01:52:36.729547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.729556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:36.729562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:36.729620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:36.755426 2149446 cri.go:96] found id: ""
	I1222 01:52:36.755451 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.755459 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:36.755465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:36.755522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:36.780359 2149446 cri.go:96] found id: ""
	I1222 01:52:36.780384 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.780393 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:36.780399 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:36.780459 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:36.810342 2149446 cri.go:96] found id: ""
	I1222 01:52:36.810370 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.810380 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:36.810386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:36.810447 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:36.836550 2149446 cri.go:96] found id: ""
	I1222 01:52:36.836586 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.836597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:36.836619 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:36.836702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:36.863450 2149446 cri.go:96] found id: ""
	I1222 01:52:36.863476 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.863485 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:36.863492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:36.863554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:36.890913 2149446 cri.go:96] found id: ""
	I1222 01:52:36.890938 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.890947 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:36.890953 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:36.891014 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:36.921826 2149446 cri.go:96] found id: ""
	I1222 01:52:36.921849 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.921858 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:36.921867 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:36.921879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:36.938824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:36.938854 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:37.002681 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:36.993721    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.994257    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.995809    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.996244    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.997718    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:37.002770 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:37.002797 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:37.038076 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:37.038111 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:37.065632 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:37.065659 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:39.635717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:39.650457 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:39.650526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:39.678446 2149446 cri.go:96] found id: ""
	I1222 01:52:39.678470 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.678479 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:39.678485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:39.678543 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:39.703602 2149446 cri.go:96] found id: ""
	I1222 01:52:39.703627 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.703636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:39.703643 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:39.703702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:39.728380 2149446 cri.go:96] found id: ""
	I1222 01:52:39.728406 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.728415 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:39.728421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:39.728484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:39.753237 2149446 cri.go:96] found id: ""
	I1222 01:52:39.753261 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.753269 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:39.753275 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:39.753373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:39.778213 2149446 cri.go:96] found id: ""
	I1222 01:52:39.778244 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.778253 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:39.778259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:39.778319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:39.808901 2149446 cri.go:96] found id: ""
	I1222 01:52:39.808925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.808933 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:39.808940 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:39.808997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:39.834452 2149446 cri.go:96] found id: ""
	I1222 01:52:39.834478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.834488 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:39.834495 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:39.834556 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:39.859475 2149446 cri.go:96] found id: ""
	I1222 01:52:39.859498 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.859507 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:39.859515 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:39.859526 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:39.893788 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:39.893816 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:39.958922 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:39.958959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:39.975346 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:39.975380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:40.062822 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:40.052520    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.053798    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.054613    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.056904    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.057401    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:40.062851 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:40.062864 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:42.596761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:42.611989 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:42.612054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:42.650231 2149446 cri.go:96] found id: ""
	I1222 01:52:42.650253 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.650261 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:42.650268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:42.650326 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:42.677520 2149446 cri.go:96] found id: ""
	I1222 01:52:42.677547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.677556 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:42.677562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:42.677620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:42.702450 2149446 cri.go:96] found id: ""
	I1222 01:52:42.702478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.702487 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:42.702493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:42.702549 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:42.727875 2149446 cri.go:96] found id: ""
	I1222 01:52:42.727947 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.727975 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:42.727994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:42.728107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:42.753077 2149446 cri.go:96] found id: ""
	I1222 01:52:42.753159 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.753187 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:42.753207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:42.753360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:42.779648 2149446 cri.go:96] found id: ""
	I1222 01:52:42.779671 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.779679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:42.779686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:42.779761 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:42.808099 2149446 cri.go:96] found id: ""
	I1222 01:52:42.808172 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.808181 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:42.808188 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:42.808247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:42.834339 2149446 cri.go:96] found id: ""
	I1222 01:52:42.834367 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.834376 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:42.834385 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:42.834399 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:42.899680 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:42.899717 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:42.916710 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:42.916749 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:42.983675 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:42.975196    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.975990    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.977681    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.978270    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.979868    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:42.983698 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:42.983710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:43.011581 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:43.011616 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:45.543456 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:45.553928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:45.554038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:45.578837 2149446 cri.go:96] found id: ""
	I1222 01:52:45.578860 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.578868 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:45.578874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:45.578932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:45.612293 2149446 cri.go:96] found id: ""
	I1222 01:52:45.612315 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.612323 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:45.612340 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:45.612407 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:45.648009 2149446 cri.go:96] found id: ""
	I1222 01:52:45.648043 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.648052 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:45.648059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:45.648157 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:45.675586 2149446 cri.go:96] found id: ""
	I1222 01:52:45.675609 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.675618 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:45.675624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:45.675686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:45.702507 2149446 cri.go:96] found id: ""
	I1222 01:52:45.702576 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.702597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:45.702610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:45.702686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:45.729853 2149446 cri.go:96] found id: ""
	I1222 01:52:45.729921 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.729943 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:45.729964 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:45.730047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:45.760311 2149446 cri.go:96] found id: ""
	I1222 01:52:45.760387 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.760420 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:45.760439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:45.760522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:45.785968 2149446 cri.go:96] found id: ""
	I1222 01:52:45.785995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.786005 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:45.786014 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:45.786027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:45.854201 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:45.854240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:45.871755 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:45.871787 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:45.937315 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:45.929306    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.929882    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.931547    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.932074    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.933637    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:45.937355 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:45.937368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:45.966159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:45.966197 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.499646 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:48.510238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:48.510330 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:48.536889 2149446 cri.go:96] found id: ""
	I1222 01:52:48.536923 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.536932 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:48.536939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:48.537006 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:48.563218 2149446 cri.go:96] found id: ""
	I1222 01:52:48.563284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.563305 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:48.563317 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:48.563393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:48.588527 2149446 cri.go:96] found id: ""
	I1222 01:52:48.588561 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.588570 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:48.588576 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:48.588643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:48.621974 2149446 cri.go:96] found id: ""
	I1222 01:52:48.622045 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.622080 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:48.622106 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:48.622197 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:48.654895 2149446 cri.go:96] found id: ""
	I1222 01:52:48.654970 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.654992 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:48.655014 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:48.655102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:48.692809 2149446 cri.go:96] found id: ""
	I1222 01:52:48.692885 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.692911 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:48.692931 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:48.693056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:48.717194 2149446 cri.go:96] found id: ""
	I1222 01:52:48.717272 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.717294 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:48.717314 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:48.717425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:48.742591 2149446 cri.go:96] found id: ""
	I1222 01:52:48.742616 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.742625 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:48.742633 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:48.742663 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:48.801798 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:48.794377    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.794846    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796207    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796539    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.798000    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:48.801831 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:48.801843 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:48.828911 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:48.828944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.860768 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:48.860798 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:48.925684 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:48.925719 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.443762 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:51.454909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:51.454993 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:51.484486 2149446 cri.go:96] found id: ""
	I1222 01:52:51.484511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.484520 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:51.484526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:51.484590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:51.510790 2149446 cri.go:96] found id: ""
	I1222 01:52:51.510814 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.510822 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:51.510828 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:51.510886 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:51.537007 2149446 cri.go:96] found id: ""
	I1222 01:52:51.537030 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.537039 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:51.537050 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:51.537110 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:51.563454 2149446 cri.go:96] found id: ""
	I1222 01:52:51.563479 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.563487 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:51.563494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:51.563551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:51.590407 2149446 cri.go:96] found id: ""
	I1222 01:52:51.590433 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.590442 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:51.590448 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:51.590512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:51.626498 2149446 cri.go:96] found id: ""
	I1222 01:52:51.626525 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.626533 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:51.626540 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:51.626604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:51.653575 2149446 cri.go:96] found id: ""
	I1222 01:52:51.653601 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.653610 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:51.653616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:51.653676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:51.686395 2149446 cri.go:96] found id: ""
	I1222 01:52:51.686425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.686435 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:51.686444 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:51.686457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:51.756701 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:51.756737 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.773423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:51.773525 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:51.837429 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:51.829362    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.830109    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.831677    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.832019    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.833581    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:51.837453 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:51.837467 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:51.870792 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:51.870830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.401483 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:54.411791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:54.411865 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:54.439704 2149446 cri.go:96] found id: ""
	I1222 01:52:54.439729 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.439737 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:54.439743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:54.439811 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:54.464997 2149446 cri.go:96] found id: ""
	I1222 01:52:54.465020 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.465029 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:54.465047 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:54.465112 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:54.489955 2149446 cri.go:96] found id: ""
	I1222 01:52:54.489979 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.489988 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:54.489994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:54.490051 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:54.520365 2149446 cri.go:96] found id: ""
	I1222 01:52:54.520390 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.520400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:54.520406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:54.520463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:54.546850 2149446 cri.go:96] found id: ""
	I1222 01:52:54.546875 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.546883 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:54.546890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:54.546971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:54.572315 2149446 cri.go:96] found id: ""
	I1222 01:52:54.572353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.572363 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:54.572385 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:54.572466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:54.606370 2149446 cri.go:96] found id: ""
	I1222 01:52:54.606405 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.606414 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:54.606421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:54.606494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:54.640572 2149446 cri.go:96] found id: ""
	I1222 01:52:54.640598 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.640607 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:54.640616 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:54.640628 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:54.708604 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:54.700120    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.700711    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.702391    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.703169    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.704748    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:52:54.708624 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:54.708637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:54.737164 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:54.737199 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.768207 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:54.768291 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:54.833444 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:54.833481 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
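Each retry cycle above runs the same probe: look for a live apiserver process, then enumerate the expected control-plane containers by name, all over SSH into the node. Reproduced by hand it looks like this (a sketch against the node from this run; container names taken from the log):

    minikube ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      minikube ssh "sudo crictl --timeout=10s ps -a --quiet --name=$name"
    done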
	I1222 01:52:57.350685 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:57.362041 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:57.362120 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:57.395509 2149446 cri.go:96] found id: ""
	I1222 01:52:57.395533 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.395542 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:57.395548 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:57.395606 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:57.422756 2149446 cri.go:96] found id: ""
	I1222 01:52:57.422778 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.422787 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:57.422794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:57.422856 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:57.449198 2149446 cri.go:96] found id: ""
	I1222 01:52:57.449274 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.449297 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:57.449332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:57.449410 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:57.475089 2149446 cri.go:96] found id: ""
	I1222 01:52:57.475116 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.475136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:57.475142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:57.475239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:57.500721 2149446 cri.go:96] found id: ""
	I1222 01:52:57.500747 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.500755 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:57.500763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:57.500823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:57.526256 2149446 cri.go:96] found id: ""
	I1222 01:52:57.526283 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.526292 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:57.526299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:57.526361 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:57.551020 2149446 cri.go:96] found id: ""
	I1222 01:52:57.551098 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.551122 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:57.551135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:57.551251 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:57.578237 2149446 cri.go:96] found id: ""
	I1222 01:52:57.578264 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.578272 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:57.578281 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:57.578293 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:57.647976 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:57.648013 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:57.669609 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:57.669637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:57.736734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:57.727886    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.728902    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730342    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730664    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.732062    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:52:57.736758 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:57.736777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:57.765104 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:57.765143 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
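The per-cycle gatherers (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) read the same sources the CLI can bundle on demand, so a single capture is often easier to work with than re-reading this loop (a sketch; --file writes the bundle to a local file):

    minikube logs --file=minikube-logs.txt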
	I1222 01:53:00.296700 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:00.314134 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:00.314362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:00.360664 2149446 cri.go:96] found id: ""
	I1222 01:53:00.360694 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.360703 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:00.360709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:00.360787 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:00.400391 2149446 cri.go:96] found id: ""
	I1222 01:53:00.400434 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.400447 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:00.400458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:00.400534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:00.437859 2149446 cri.go:96] found id: ""
	I1222 01:53:00.437883 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.437892 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:00.437899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:00.437960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:00.468446 2149446 cri.go:96] found id: ""
	I1222 01:53:00.468514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.468536 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:00.468555 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:00.468641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:00.494431 2149446 cri.go:96] found id: ""
	I1222 01:53:00.494462 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.494471 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:00.494486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:00.494564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:00.520633 2149446 cri.go:96] found id: ""
	I1222 01:53:00.520660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.520669 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:00.520675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:00.520735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:00.552618 2149446 cri.go:96] found id: ""
	I1222 01:53:00.552646 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.552655 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:00.552661 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:00.552721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:00.578469 2149446 cri.go:96] found id: ""
	I1222 01:53:00.578492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.578500 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:00.578509 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:00.578523 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:00.650905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:00.650939 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:00.669192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:00.669220 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:00.732631 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:00.724784    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.725580    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727143    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727454    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.728888    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:53:00.732650 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:00.732662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:00.761586 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:00.761617 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.289824 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:03.300567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:03.300641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:03.326667 2149446 cri.go:96] found id: ""
	I1222 01:53:03.326690 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.326698 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:03.326705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:03.326763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:03.363614 2149446 cri.go:96] found id: ""
	I1222 01:53:03.363637 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.363652 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:03.363662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:03.363743 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:03.391010 2149446 cri.go:96] found id: ""
	I1222 01:53:03.391033 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.391047 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:03.391054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:03.391126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:03.419391 2149446 cri.go:96] found id: ""
	I1222 01:53:03.419418 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.419426 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:03.419433 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:03.419494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:03.445436 2149446 cri.go:96] found id: ""
	I1222 01:53:03.445461 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.445470 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:03.445476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:03.445534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:03.475787 2149446 cri.go:96] found id: ""
	I1222 01:53:03.475812 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.475821 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:03.475827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:03.475894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:03.502216 2149446 cri.go:96] found id: ""
	I1222 01:53:03.502247 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.502256 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:03.502262 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:03.502323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:03.532273 2149446 cri.go:96] found id: ""
	I1222 01:53:03.532294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.532303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:03.532312 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:03.532323 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:03.548937 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:03.548964 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:03.633734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:03.626155    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.626558    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628013    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628303    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.629725    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:53:03.633808 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:03.633837 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:03.665918 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:03.665950 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.698908 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:03.698933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
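With every container list coming back empty, the kubelet journal gathered above is where a static-pod launch failure would surface first. A quick triage filter over the same source (a sketch):

    minikube ssh "sudo journalctl -u kubelet --no-pager -n 400 | grep -iE 'error|fail|apiserver' | tail -n 40"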
	I1222 01:53:06.272029 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:06.282999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:06.283072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:06.309858 2149446 cri.go:96] found id: ""
	I1222 01:53:06.309882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.309890 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:06.309896 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:06.309954 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:06.334809 2149446 cri.go:96] found id: ""
	I1222 01:53:06.334837 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.334845 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:06.334852 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:06.334922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:06.366780 2149446 cri.go:96] found id: ""
	I1222 01:53:06.366807 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.366816 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:06.366823 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:06.366884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:06.396941 2149446 cri.go:96] found id: ""
	I1222 01:53:06.397018 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.397040 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:06.397057 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:06.397162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:06.424268 2149446 cri.go:96] found id: ""
	I1222 01:53:06.424294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.424302 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:06.424309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:06.424384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:06.451699 2149446 cri.go:96] found id: ""
	I1222 01:53:06.451723 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.451732 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:06.451739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:06.451799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:06.480588 2149446 cri.go:96] found id: ""
	I1222 01:53:06.480610 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.480618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:06.480624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:06.480685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:06.508393 2149446 cri.go:96] found id: ""
	I1222 01:53:06.508419 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.508428 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:06.508437 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:06.508449 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:06.525070 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:06.525101 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:06.590361 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:06.581622    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.582460    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.583922    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.584374    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.586045    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:53:06.590383 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:06.590398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:06.624081 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:06.624115 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:06.663556 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:06.663585 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:09.229815 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:09.240655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:09.240737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:09.265982 2149446 cri.go:96] found id: ""
	I1222 01:53:09.266004 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.266013 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:09.266019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:09.266078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:09.291816 2149446 cri.go:96] found id: ""
	I1222 01:53:09.291841 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.291849 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:09.291855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:09.291914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:09.321089 2149446 cri.go:96] found id: ""
	I1222 01:53:09.321114 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.321123 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:09.321129 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:09.321187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:09.351795 2149446 cri.go:96] found id: ""
	I1222 01:53:09.351824 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.351833 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:09.351839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:09.351897 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:09.386800 2149446 cri.go:96] found id: ""
	I1222 01:53:09.386828 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.386837 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:09.386844 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:09.386902 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:09.416549 2149446 cri.go:96] found id: ""
	I1222 01:53:09.416574 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.416582 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:09.416589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:09.416648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:09.442626 2149446 cri.go:96] found id: ""
	I1222 01:53:09.442649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.442662 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:09.442671 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:09.442770 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:09.468113 2149446 cri.go:96] found id: ""
	I1222 01:53:09.468187 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.468209 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:09.468230 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:09.468270 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:09.531958 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:09.523337    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.523854    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.525572    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.526159    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.527780    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:53:09.531982 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:09.531994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:09.559710 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:09.559743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:09.601932 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:09.602030 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:09.676284 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:09.676317 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:12.195759 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:12.206480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:12.206553 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:12.235418 2149446 cri.go:96] found id: ""
	I1222 01:53:12.235491 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.235513 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:12.235527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:12.235611 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:12.261441 2149446 cri.go:96] found id: ""
	I1222 01:53:12.261467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.261476 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:12.261482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:12.261551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:12.291019 2149446 cri.go:96] found id: ""
	I1222 01:53:12.291044 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.291053 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:12.291059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:12.291117 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:12.317108 2149446 cri.go:96] found id: ""
	I1222 01:53:12.317132 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.317152 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:12.317160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:12.317222 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:12.345136 2149446 cri.go:96] found id: ""
	I1222 01:53:12.345164 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.345173 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:12.345180 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:12.345239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:12.372959 2149446 cri.go:96] found id: ""
	I1222 01:53:12.372991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.373000 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:12.373006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:12.373076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:12.413013 2149446 cri.go:96] found id: ""
	I1222 01:53:12.413049 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.413058 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:12.413065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:12.413134 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:12.439197 2149446 cri.go:96] found id: ""
	I1222 01:53:12.439225 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.439235 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:12.439245 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:12.439256 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:12.469256 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:12.469285 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:12.534711 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:12.534748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:12.551566 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:12.551637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:12.624758 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:12.616140    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.617087    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.618606    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.619192    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.620695    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1222 01:53:12.624828 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:12.624858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
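The CRI-O journal read here is a point-in-time snapshot; when chasing a container that never starts, following the unit live can be more informative (a sketch):

    minikube ssh "sudo journalctl -u crio -f"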
	I1222 01:53:15.157013 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:15.168255 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:15.168328 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:15.195462 2149446 cri.go:96] found id: ""
	I1222 01:53:15.195486 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.195494 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:15.195501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:15.195560 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:15.227170 2149446 cri.go:96] found id: ""
	I1222 01:53:15.227198 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.227208 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:15.227215 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:15.227282 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:15.254951 2149446 cri.go:96] found id: ""
	I1222 01:53:15.254975 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.254983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:15.254990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:15.255073 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:15.280554 2149446 cri.go:96] found id: ""
	I1222 01:53:15.280577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.280586 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:15.280592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:15.280677 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:15.306166 2149446 cri.go:96] found id: ""
	I1222 01:53:15.306189 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.306198 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:15.306203 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:15.306261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:15.331390 2149446 cri.go:96] found id: ""
	I1222 01:53:15.331414 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.331422 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:15.331429 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:15.331487 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:15.372404 2149446 cri.go:96] found id: ""
	I1222 01:53:15.372433 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.372442 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:15.372449 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:15.372512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:15.404854 2149446 cri.go:96] found id: ""
	I1222 01:53:15.404894 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.404902 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:15.404918 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:15.404930 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:15.469796 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:15.469833 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:15.487192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:15.487222 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:15.550547 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:15.541963    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.542670    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.544415    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.545061    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.546601    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:15.541963    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.542670    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.544415    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.545061    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.546601    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:15.550568 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:15.550582 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.581523 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:15.581556 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
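	(Annotation: the cycle above probes each expected control-plane component by container name and finds none in any state — every listing returns `found id: ""`. A minimal sketch of the same probe, reusing the crictl invocation recorded verbatim in the Run: lines; run it on the node, e.g. via `minikube ssh`. The component list is the one this log cycles through.)

	    #!/usr/bin/env bash
	    # Mirror the "listing CRI containers" checks above: empty output for a
	    # name means no container exists for that component in any state.
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="${name}")
	      if [ -z "${ids}" ]; then
	        echo "no container matching \"${name}\""
	      else
	        echo "${name}: ${ids}"
	      fi
	    done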
	I1222 01:53:18.130208 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:18.141246 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:18.141343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:18.168196 2149446 cri.go:96] found id: ""
	I1222 01:53:18.168219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.168227 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:18.168234 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:18.168294 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:18.194001 2149446 cri.go:96] found id: ""
	I1222 01:53:18.194025 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.194034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:18.194040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:18.194101 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:18.220324 2149446 cri.go:96] found id: ""
	I1222 01:53:18.220354 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.220362 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:18.220369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:18.220427 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:18.249998 2149446 cri.go:96] found id: ""
	I1222 01:53:18.250027 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.250036 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:18.250042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:18.250102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:18.275763 2149446 cri.go:96] found id: ""
	I1222 01:53:18.275789 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.275798 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:18.275805 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:18.275863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:18.304972 2149446 cri.go:96] found id: ""
	I1222 01:53:18.305001 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.305009 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:18.305015 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:18.305074 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:18.330117 2149446 cri.go:96] found id: ""
	I1222 01:53:18.330139 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.330147 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:18.330154 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:18.330214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:18.368632 2149446 cri.go:96] found id: ""
	I1222 01:53:18.368663 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.368672 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:18.368680 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:18.368727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:18.442905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:18.442946 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:18.459582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:18.459658 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:18.528179 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:18.516076    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.516516    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.522440    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.523458    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.524496    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:18.516076    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.516516    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.522440    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.523458    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.524496    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:18.528201 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:18.528213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:18.555680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:18.555710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.084080 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:21.094603 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:21.094678 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:21.118899 2149446 cri.go:96] found id: ""
	I1222 01:53:21.118926 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.118936 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:21.118943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:21.119001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:21.147861 2149446 cri.go:96] found id: ""
	I1222 01:53:21.147890 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.147898 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:21.147905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:21.147968 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:21.172300 2149446 cri.go:96] found id: ""
	I1222 01:53:21.172328 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.172344 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:21.172350 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:21.172408 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:21.196518 2149446 cri.go:96] found id: ""
	I1222 01:53:21.196547 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.196555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:21.196561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:21.196621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:21.222351 2149446 cri.go:96] found id: ""
	I1222 01:53:21.222373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.222381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:21.222387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:21.222445 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:21.247014 2149446 cri.go:96] found id: ""
	I1222 01:53:21.247056 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.247065 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:21.247072 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:21.247130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:21.272507 2149446 cri.go:96] found id: ""
	I1222 01:53:21.272530 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.272538 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:21.272545 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:21.272603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:21.298865 2149446 cri.go:96] found id: ""
	I1222 01:53:21.298930 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.298955 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:21.298969 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:21.298981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:21.375967 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:21.367288   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.368097   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.369766   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.370303   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.371978   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:21.367288   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.368097   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.369766   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.370303   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.371978   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:21.375990 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:21.376003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:21.407155 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:21.407194 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.439948 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:21.439978 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:21.506644 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:21.506681 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:24.027394 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:24.038461 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:24.038535 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:24.065524 2149446 cri.go:96] found id: ""
	I1222 01:53:24.065549 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.065558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:24.065564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:24.065631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:24.096630 2149446 cri.go:96] found id: ""
	I1222 01:53:24.096660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.096670 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:24.096677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:24.096744 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:24.122428 2149446 cri.go:96] found id: ""
	I1222 01:53:24.122453 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.122462 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:24.122468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:24.122533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:24.147612 2149446 cri.go:96] found id: ""
	I1222 01:53:24.147634 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.147642 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:24.147649 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:24.147713 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:24.176841 2149446 cri.go:96] found id: ""
	I1222 01:53:24.176913 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.176933 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:24.176954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:24.177042 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:24.206047 2149446 cri.go:96] found id: ""
	I1222 01:53:24.206073 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.206082 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:24.206089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:24.206148 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:24.232264 2149446 cri.go:96] found id: ""
	I1222 01:53:24.232292 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.232301 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:24.232307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:24.232376 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:24.258596 2149446 cri.go:96] found id: ""
	I1222 01:53:24.258624 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.258633 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:24.258642 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:24.258655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:24.286162 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:24.286196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:24.314664 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:24.314690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:24.384317 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:24.384405 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:24.404914 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:24.404992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:24.478461 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:24.470772   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.471164   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472646   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472964   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.474221   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:24.470772   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.471164   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472646   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472964   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.474221   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
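	(Annotation: every `describe nodes` attempt fails identically — kubectl cannot reach the apiserver because nothing is listening on localhost:8443 (`connect: connection refused`), which is consistent with the empty kube-apiserver container listings above. A quick manual check from the node, assuming curl, grep, and ss are available there; the /healthz and /readyz paths are the standard apiserver health endpoints, not taken from this log.)

	    # Is anything listening on the apiserver port? If not, "connection
	    # refused" is expected and the health probes below will also fail.
	    sudo ss -ltn | grep -w 8443 || echo "no listener on :8443"
	    # If a listener exists, these unauthenticated endpoints report health.
	    curl -ksS https://localhost:8443/healthz; echo
	    curl -ksS https://localhost:8443/readyz;  echo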
	I1222 01:53:26.978694 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:26.989065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:26.989132 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:27.017546 2149446 cri.go:96] found id: ""
	I1222 01:53:27.017577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.017586 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:27.017593 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:27.017662 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:27.047826 2149446 cri.go:96] found id: ""
	I1222 01:53:27.047855 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.047863 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:27.047871 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:27.047931 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:27.073912 2149446 cri.go:96] found id: ""
	I1222 01:53:27.073940 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.073952 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:27.073958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:27.074018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:27.098951 2149446 cri.go:96] found id: ""
	I1222 01:53:27.098974 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.098982 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:27.098988 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:27.099047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:27.124715 2149446 cri.go:96] found id: ""
	I1222 01:53:27.124739 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.124748 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:27.124754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:27.124817 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:27.150398 2149446 cri.go:96] found id: ""
	I1222 01:53:27.150420 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.150431 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:27.150437 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:27.150502 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:27.176315 2149446 cri.go:96] found id: ""
	I1222 01:53:27.176347 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.176356 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:27.176386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:27.176493 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:27.203872 2149446 cri.go:96] found id: ""
	I1222 01:53:27.203953 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.203977 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:27.204000 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:27.204034 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:27.237039 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:27.237067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:27.301398 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:27.301432 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:27.317630 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:27.317657 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:27.393864 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:27.383878   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.385765   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.387565   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.388163   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.389778   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:27.383878   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.385765   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.387565   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.388163   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.389778   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:27.393934 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:27.393960 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:29.927573 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:29.938417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:29.938524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:29.964036 2149446 cri.go:96] found id: ""
	I1222 01:53:29.964065 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.964073 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:29.964080 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:29.964138 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:29.988604 2149446 cri.go:96] found id: ""
	I1222 01:53:29.988632 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.988641 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:29.988647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:29.988708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:30.026459 2149446 cri.go:96] found id: ""
	I1222 01:53:30.026487 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.026497 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:30.026504 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:30.026573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:30.057912 2149446 cri.go:96] found id: ""
	I1222 01:53:30.057939 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.057949 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:30.057955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:30.058030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:30.083599 2149446 cri.go:96] found id: ""
	I1222 01:53:30.083668 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.083690 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:30.083709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:30.083806 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:30.110551 2149446 cri.go:96] found id: ""
	I1222 01:53:30.110620 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.110641 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:30.110655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:30.110735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:30.137174 2149446 cri.go:96] found id: ""
	I1222 01:53:30.137203 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.137211 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:30.137218 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:30.137277 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:30.162716 2149446 cri.go:96] found id: ""
	I1222 01:53:30.162753 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.162763 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:30.162772 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:30.162791 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:30.227759 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:30.227794 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:30.245443 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:30.245482 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:30.307769 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:30.299415   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.299980   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301457   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301852   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.303320   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:30.299415   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.299980   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301457   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301852   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.303320   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:30.307792 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:30.307803 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:30.335176 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:30.335210 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:32.871091 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:32.881783 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:32.881907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:32.916036 2149446 cri.go:96] found id: ""
	I1222 01:53:32.916106 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.916128 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:32.916147 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:32.916230 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:32.943595 2149446 cri.go:96] found id: ""
	I1222 01:53:32.943619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.943627 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:32.943634 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:32.943692 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:32.968836 2149446 cri.go:96] found id: ""
	I1222 01:53:32.968860 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.968868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:32.968874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:32.968956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:32.997882 2149446 cri.go:96] found id: ""
	I1222 01:53:32.997950 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.997973 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:32.997985 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:32.998068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:33.026110 2149446 cri.go:96] found id: ""
	I1222 01:53:33.026137 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.026145 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:33.026152 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:33.026229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:33.055281 2149446 cri.go:96] found id: ""
	I1222 01:53:33.055306 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.055315 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:33.055321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:33.055398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:33.086387 2149446 cri.go:96] found id: ""
	I1222 01:53:33.086465 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.086486 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:33.086499 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:33.086576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:33.112687 2149446 cri.go:96] found id: ""
	I1222 01:53:33.112730 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.112740 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:33.112749 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:33.112760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:33.181024 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:33.181067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:33.198158 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:33.198190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:33.273247 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:33.265089   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.265940   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.267575   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.268051   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.269666   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:33.265089   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.265940   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.267575   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.268051   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.269666   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:33.273341 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:33.273378 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:33.301159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:33.301190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:35.836720 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:35.847980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:35.848054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:35.879849 2149446 cri.go:96] found id: ""
	I1222 01:53:35.879879 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.879888 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:35.879894 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:35.879956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:35.915400 2149446 cri.go:96] found id: ""
	I1222 01:53:35.915426 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.915434 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:35.915441 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:35.915511 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:35.940054 2149446 cri.go:96] found id: ""
	I1222 01:53:35.940078 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.940086 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:35.940093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:35.940153 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:35.970602 2149446 cri.go:96] found id: ""
	I1222 01:53:35.970625 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.970634 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:35.970641 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:35.970699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:36.000035 2149446 cri.go:96] found id: ""
	I1222 01:53:36.000060 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.000069 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:36.000076 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:36.000146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:36.030030 2149446 cri.go:96] found id: ""
	I1222 01:53:36.030062 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.030072 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:36.030085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:36.030160 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:36.056509 2149446 cri.go:96] found id: ""
	I1222 01:53:36.056535 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.056547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:36.056554 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:36.056615 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:36.083227 2149446 cri.go:96] found id: ""
	I1222 01:53:36.083254 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.083262 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:36.083271 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:36.083282 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:36.112082 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:36.112120 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:36.144537 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:36.144568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:36.209908 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:36.209989 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:36.228268 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:36.228349 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:36.290580 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:36.282527   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.283074   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284509   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284836   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.286252   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:36.282527   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.283074   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284509   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284836   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.286252   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
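	(Annotation: each retry cycle gathers the same five diagnostic sources — kubelet and CRI-O via journalctl, dmesg, `kubectl describe nodes`, and container status via crictl with a docker fallback — only the ordering rotates between cycles. A sketch collecting that bundle in one pass, with every command copied from the Run: lines above; the kubectl binary path and kubeconfig location are the version-specific ones this log uses.)

	    #!/usr/bin/env bash
	    # One-shot version of the per-cycle log gathering above.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a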
	I1222 01:53:38.791376 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:38.803195 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:38.803268 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:38.827957 2149446 cri.go:96] found id: ""
	I1222 01:53:38.827983 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.827991 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:38.827997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:38.828059 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:38.858193 2149446 cri.go:96] found id: ""
	I1222 01:53:38.858219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.858232 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:38.858238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:38.858300 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:38.892136 2149446 cri.go:96] found id: ""
	I1222 01:53:38.892168 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.892177 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:38.892184 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:38.892262 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:38.921915 2149446 cri.go:96] found id: ""
	I1222 01:53:38.921984 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.921999 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:38.922008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:38.922067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:38.947824 2149446 cri.go:96] found id: ""
	I1222 01:53:38.947847 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.947855 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:38.947862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:38.947921 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:38.972933 2149446 cri.go:96] found id: ""
	I1222 01:53:38.972959 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.972967 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:38.972974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:38.973039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:39.007055 2149446 cri.go:96] found id: ""
	I1222 01:53:39.007083 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.007092 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:39.007099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:39.007162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:39.033180 2149446 cri.go:96] found id: ""
	I1222 01:53:39.033210 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.033219 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:39.033228 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:39.033240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:39.098988 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:39.099022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:39.117169 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:39.117195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:39.179213 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:39.171486   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.172049   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.173620   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.174041   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.175480   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:39.171486   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.172049   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.173620   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.174041   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.175480   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:39.179238 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:39.179250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:39.207330 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:39.207363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:41.736154 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:41.748704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:41.748779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:41.774614 2149446 cri.go:96] found id: ""
	I1222 01:53:41.774640 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.774658 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:41.774668 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:41.774726 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:41.804623 2149446 cri.go:96] found id: ""
	I1222 01:53:41.804649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.804658 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:41.804665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:41.804723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:41.833829 2149446 cri.go:96] found id: ""
	I1222 01:53:41.833856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.833865 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:41.833872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:41.833936 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:41.875465 2149446 cri.go:96] found id: ""
	I1222 01:53:41.875492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.875500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:41.875507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:41.875571 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:41.905442 2149446 cri.go:96] found id: ""
	I1222 01:53:41.905467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.905476 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:41.905482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:41.905570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:41.932195 2149446 cri.go:96] found id: ""
	I1222 01:53:41.932222 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.932231 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:41.932238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:41.932323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:41.958757 2149446 cri.go:96] found id: ""
	I1222 01:53:41.958781 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.958795 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:41.958801 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:41.958884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:41.984634 2149446 cri.go:96] found id: ""
	I1222 01:53:41.984658 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.984666 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:41.984683 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:41.984711 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:42.012813 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:42.012851 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:42.047278 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:42.047308 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:42.115243 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:42.115288 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:42.152682 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:42.152891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:42.276937 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:42.267520   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.268654   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.269846   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.270701   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.271864   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:42.267520   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.268654   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.269846   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.270701   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.271864   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:44.778660 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:44.788968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:44.789038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:44.814474 2149446 cri.go:96] found id: ""
	I1222 01:53:44.814498 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.814507 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:44.814514 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:44.814572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:44.838957 2149446 cri.go:96] found id: ""
	I1222 01:53:44.838981 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.838990 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:44.838997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:44.839055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:44.871015 2149446 cri.go:96] found id: ""
	I1222 01:53:44.871040 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.871048 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:44.871055 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:44.871126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:44.901830 2149446 cri.go:96] found id: ""
	I1222 01:53:44.901852 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.901860 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:44.901866 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:44.901926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:44.932837 2149446 cri.go:96] found id: ""
	I1222 01:53:44.932912 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.932936 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:44.932954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:44.933034 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:44.958646 2149446 cri.go:96] found id: ""
	I1222 01:53:44.958670 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.958679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:44.958685 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:44.958745 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:44.990447 2149446 cri.go:96] found id: ""
	I1222 01:53:44.990470 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.990479 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:44.990485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:44.990551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:45.030189 2149446 cri.go:96] found id: ""
	I1222 01:53:45.030216 2149446 logs.go:282] 0 containers: []
	W1222 01:53:45.030226 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:45.030236 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:45.030250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:45.105196 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:45.105243 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:45.127602 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:45.127641 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:45.224662 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:45.212653   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.213852   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.215129   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.216095   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.218056   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:45.212653   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.213852   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.215129   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.216095   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.218056   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
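
	[editor's note] Every `dial tcp [::1]:8443: connect: connection refused` line above comes from kubectl's discovery client: with no kube-apiserver container running, nothing is listening on localhost:8443, so each API group lookup fails before the final "connection to the server localhost:8443 was refused" summary. A quick way to confirm the port is dead from inside the node (a hypothetical check, not part of the original run; `ss` and `curl` are assumed to be present):

	    # No listener on 8443 means kubectl discovery can only get "connection refused".
	    sudo ss -ltnp | grep ':8443' || echo 'no listener on :8443'
	    curl -sk --max-time 5 https://localhost:8443/healthz || true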
	I1222 01:53:45.224690 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:45.224706 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:45.294896 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:45.294933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:47.831558 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:47.843230 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:47.843313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:47.888291 2149446 cri.go:96] found id: ""
	I1222 01:53:47.888314 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.888323 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:47.888329 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:47.888398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:47.921941 2149446 cri.go:96] found id: ""
	I1222 01:53:47.921965 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.921973 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:47.921979 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:47.922038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:47.948316 2149446 cri.go:96] found id: ""
	I1222 01:53:47.948349 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.948358 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:47.948373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:47.948449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:47.975804 2149446 cri.go:96] found id: ""
	I1222 01:53:47.975882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.975898 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:47.975905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:47.975969 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:48.017770 2149446 cri.go:96] found id: ""
	I1222 01:53:48.017798 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.017807 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:48.017814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:48.017876 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:48.045819 2149446 cri.go:96] found id: ""
	I1222 01:53:48.045844 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.045853 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:48.045862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:48.045924 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:48.072420 2149446 cri.go:96] found id: ""
	I1222 01:53:48.072446 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.072454 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:48.072467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:48.072530 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:48.098359 2149446 cri.go:96] found id: ""
	I1222 01:53:48.098384 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.098392 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:48.098401 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:48.098443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:48.115484 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:48.115515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:48.182899 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:48.174594   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.175246   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.176834   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.177351   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.178997   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:48.174594   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.175246   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.176834   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.177351   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.178997   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:48.182970 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:48.182991 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:48.211848 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:48.211883 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:48.240534 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:48.240647 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:50.807734 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:50.819902 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:50.819972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:50.856953 2149446 cri.go:96] found id: ""
	I1222 01:53:50.856980 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.856989 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:50.856995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:50.857057 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:50.893636 2149446 cri.go:96] found id: ""
	I1222 01:53:50.893665 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.893675 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:50.893681 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:50.893742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:50.923962 2149446 cri.go:96] found id: ""
	I1222 01:53:50.923991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.924000 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:50.924006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:50.924066 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:50.949904 2149446 cri.go:96] found id: ""
	I1222 01:53:50.949931 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.949940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:50.949946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:50.950007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:50.979592 2149446 cri.go:96] found id: ""
	I1222 01:53:50.979619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.979627 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:50.979633 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:50.979696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:51.006796 2149446 cri.go:96] found id: ""
	I1222 01:53:51.006823 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.006831 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:51.006838 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:51.006907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:51.036723 2149446 cri.go:96] found id: ""
	I1222 01:53:51.036750 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.036758 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:51.036765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:51.036826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:51.066606 2149446 cri.go:96] found id: ""
	I1222 01:53:51.066630 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.066639 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:51.066648 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:51.066661 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:51.132268 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:51.132306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:51.150033 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:51.150064 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:51.221006 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:51.211594   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.212356   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.214275   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215136   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215987   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:51.211594   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.212356   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.214275   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215136   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215987   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
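
	[editor's note] The probe timestamps (01:53:38, :41, :44, :47, :50, ...) show the outer loop retrying roughly every three seconds until a kube-apiserver process appears or the overall start timeout expires. A hedged bash sketch of that outer loop, using the exact pgrep invocation from the log (the real loop lives in minikube's Go code; the 3s interval is inferred from this run, not a documented constant):

	    # Retry until an apiserver process matching the minikube profile shows up.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	      sleep 3   # observed cadence in this log; the timeout eventually aborts the wait
	    done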
	I1222 01:53:51.221029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:51.221042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:51.249405 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:51.249437 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:53.781794 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:53.792491 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:53.792562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:53.821565 2149446 cri.go:96] found id: ""
	I1222 01:53:53.821592 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.821601 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:53.821607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:53.821664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:53.858004 2149446 cri.go:96] found id: ""
	I1222 01:53:53.858026 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.858035 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:53.858042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:53.858102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:53.884816 2149446 cri.go:96] found id: ""
	I1222 01:53:53.884840 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.884848 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:53.884855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:53.884917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:53.910690 2149446 cri.go:96] found id: ""
	I1222 01:53:53.910763 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.910785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:53.910806 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:53.910896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:53.938355 2149446 cri.go:96] found id: ""
	I1222 01:53:53.938382 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.938392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:53.938398 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:53.938484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:53.968188 2149446 cri.go:96] found id: ""
	I1222 01:53:53.968220 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.968229 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:53.968236 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:53.968344 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:53.997318 2149446 cri.go:96] found id: ""
	I1222 01:53:53.997373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.997382 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:53.997388 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:53.997462 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:54.025412 2149446 cri.go:96] found id: ""
	I1222 01:53:54.025514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:54.025538 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:54.025561 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:54.025608 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:54.093935 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:54.085512   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.086255   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.087980   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.088420   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.090003   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:54.085512   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.086255   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.087980   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.088420   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.090003   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:54.093957 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:54.093969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:54.123758 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:54.123795 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:54.153879 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:54.153908 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:54.220347 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:54.220396 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:56.737648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:56.748600 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:56.748674 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:56.774130 2149446 cri.go:96] found id: ""
	I1222 01:53:56.774156 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.774164 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:56.774171 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:56.774232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:56.800427 2149446 cri.go:96] found id: ""
	I1222 01:53:56.800458 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.800466 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:56.800472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:56.800533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:56.825296 2149446 cri.go:96] found id: ""
	I1222 01:53:56.825341 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.825352 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:56.825359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:56.825417 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:56.859965 2149446 cri.go:96] found id: ""
	I1222 01:53:56.859992 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.860000 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:56.860007 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:56.860070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:56.893738 2149446 cri.go:96] found id: ""
	I1222 01:53:56.893764 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.893773 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:56.893779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:56.893840 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:56.922780 2149446 cri.go:96] found id: ""
	I1222 01:53:56.922805 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.922814 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:56.922820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:56.922882 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:56.951835 2149446 cri.go:96] found id: ""
	I1222 01:53:56.951865 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.951874 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:56.951881 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:56.951942 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:56.982753 2149446 cri.go:96] found id: ""
	I1222 01:53:56.982788 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.982802 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:56.982832 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:56.982859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:57.054734 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:57.054771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:57.071366 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:57.071447 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:57.139693 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:57.131885   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.132329   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134013   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134495   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.135751   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:53:57.131885   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.132329   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134013   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134495   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.135751   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:53:57.139715 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:57.139728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:57.168869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:57.168907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.701462 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:59.712252 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:59.712324 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:59.737831 2149446 cri.go:96] found id: ""
	I1222 01:53:59.737856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.737865 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:59.737872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:59.737935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:59.765066 2149446 cri.go:96] found id: ""
	I1222 01:53:59.765090 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.765099 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:59.765105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:59.765165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:59.789855 2149446 cri.go:96] found id: ""
	I1222 01:53:59.789878 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.789887 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:59.789893 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:59.789965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:59.815579 2149446 cri.go:96] found id: ""
	I1222 01:53:59.815601 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.815609 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:59.815616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:59.815676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:59.840029 2149446 cri.go:96] found id: ""
	I1222 01:53:59.840051 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.840059 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:59.840066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:59.840127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:59.870757 2149446 cri.go:96] found id: ""
	I1222 01:53:59.870790 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.870804 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:59.870815 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:59.870883 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:59.899916 2149446 cri.go:96] found id: ""
	I1222 01:53:59.899941 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.899954 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:59.899962 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:59.900029 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:59.927188 2149446 cri.go:96] found id: ""
	I1222 01:53:59.927221 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.927230 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:59.927239 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:59.927250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:59.957210 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:59.957247 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.988485 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:59.988515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:00.082944 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:00.082993 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:00.133491 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:00.133527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:00.297208 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:00.279334   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.280543   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.281714   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.282703   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.290162   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:00.279334   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.280543   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.281714   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.282703   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.290162   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:02.797491 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:02.810137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:02.810229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:02.840273 2149446 cri.go:96] found id: ""
	I1222 01:54:02.840297 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.840305 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:02.840312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:02.840420 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:02.875715 2149446 cri.go:96] found id: ""
	I1222 01:54:02.875739 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.875747 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:02.875754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:02.875813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:02.919822 2149446 cri.go:96] found id: ""
	I1222 01:54:02.919846 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.919854 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:02.919860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:02.919919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:02.948124 2149446 cri.go:96] found id: ""
	I1222 01:54:02.948146 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.948154 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:02.948160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:02.948219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:02.974340 2149446 cri.go:96] found id: ""
	I1222 01:54:02.974363 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.974372 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:02.974378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:02.974438 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:02.999271 2149446 cri.go:96] found id: ""
	I1222 01:54:02.999296 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.999305 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:02.999312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:02.999385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:03.030970 2149446 cri.go:96] found id: ""
	I1222 01:54:03.030993 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.031002 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:03.031008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:03.031068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:03.056002 2149446 cri.go:96] found id: ""
	I1222 01:54:03.056029 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.056038 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:03.056047 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:03.056058 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:03.125375 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:03.125411 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:03.145686 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:03.145716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:03.218509 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:03.209113   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.210343   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.211031   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.212871   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.213440   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:03.209113   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.210343   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.211031   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.212871   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.213440   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:03.218530 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:03.218542 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:03.248642 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:03.248677 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:05.780702 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:05.791068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:05.791146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:05.816364 2149446 cri.go:96] found id: ""
	I1222 01:54:05.816438 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.816460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:05.816477 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:05.816563 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:05.841188 2149446 cri.go:96] found id: ""
	I1222 01:54:05.841211 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.841220 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:05.841226 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:05.841288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:05.881850 2149446 cri.go:96] found id: ""
	I1222 01:54:05.881872 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.881881 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:05.881887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:05.881948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:05.916551 2149446 cri.go:96] found id: ""
	I1222 01:54:05.916573 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.916582 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:05.916589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:05.916651 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:05.943199 2149446 cri.go:96] found id: ""
	I1222 01:54:05.943226 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.943234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:05.943240 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:05.943305 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:05.970571 2149446 cri.go:96] found id: ""
	I1222 01:54:05.970596 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.970605 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:05.970611 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:05.970671 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:05.995460 2149446 cri.go:96] found id: ""
	I1222 01:54:05.995526 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.995547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:05.995567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:05.995644 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:06.023405 2149446 cri.go:96] found id: ""
	I1222 01:54:06.023433 2149446 logs.go:282] 0 containers: []
	W1222 01:54:06.023442 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:06.023451 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:06.023462 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:06.052389 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:06.052429 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:06.081830 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:06.081857 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:06.148270 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:06.148310 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:06.164614 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:06.164642 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:06.229843 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:06.221234   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.222106   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.223684   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.224107   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.225591   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:06.221234   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.222106   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.223684   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.224107   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.225591   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:08.730857 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:08.741318 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:08.741413 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:08.767143 2149446 cri.go:96] found id: ""
	I1222 01:54:08.767169 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.767182 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:08.767190 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:08.767248 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:08.791625 2149446 cri.go:96] found id: ""
	I1222 01:54:08.791658 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.791667 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:08.791673 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:08.791739 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:08.816602 2149446 cri.go:96] found id: ""
	I1222 01:54:08.816636 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.816645 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:08.816652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:08.816718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:08.844185 2149446 cri.go:96] found id: ""
	I1222 01:54:08.844261 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.844283 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:08.844302 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:08.844404 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:08.883312 2149446 cri.go:96] found id: ""
	I1222 01:54:08.883384 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.883408 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:08.883428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:08.883517 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:08.911421 2149446 cri.go:96] found id: ""
	I1222 01:54:08.911457 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.911466 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:08.911473 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:08.911538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:08.942750 2149446 cri.go:96] found id: ""
	I1222 01:54:08.942775 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.942784 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:08.942791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:08.942848 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:08.967789 2149446 cri.go:96] found id: ""
	I1222 01:54:08.967813 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.967822 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:08.967831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:08.967842 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:09.033639 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:09.033675 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:09.052146 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:09.052175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:09.116330 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:09.116356 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:09.116368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:09.144973 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:09.145004 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:11.673475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:11.684547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:11.684626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:11.710519 2149446 cri.go:96] found id: ""
	I1222 01:54:11.710548 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.710557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:11.710564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:11.710626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:11.735548 2149446 cri.go:96] found id: ""
	I1222 01:54:11.735572 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.735582 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:11.735589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:11.735654 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:11.760929 2149446 cri.go:96] found id: ""
	I1222 01:54:11.760954 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.760963 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:11.760969 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:11.761033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:11.787093 2149446 cri.go:96] found id: ""
	I1222 01:54:11.787120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.787128 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:11.787135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:11.787193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:11.812911 2149446 cri.go:96] found id: ""
	I1222 01:54:11.812936 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.812945 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:11.812951 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:11.813015 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:11.838482 2149446 cri.go:96] found id: ""
	I1222 01:54:11.838505 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.838514 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:11.838520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:11.838583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:11.866259 2149446 cri.go:96] found id: ""
	I1222 01:54:11.866286 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.866306 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:11.866313 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:11.866384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:11.893985 2149446 cri.go:96] found id: ""
	I1222 01:54:11.894008 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.894016 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:11.894025 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:11.894037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:11.913582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:11.913611 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:11.981873 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:11.974736   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.975183   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.976706   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.977106   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.978227   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:11.974736   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.975183   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.976706   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.977106   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.978227   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:11.981895 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:11.981907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:12.010106 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:12.010148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:12.040439 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:12.040469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.613993 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:14.626494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:14.626566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:14.659356 2149446 cri.go:96] found id: ""
	I1222 01:54:14.659378 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.659386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:14.659392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:14.659453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:14.686289 2149446 cri.go:96] found id: ""
	I1222 01:54:14.686311 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.686319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:14.686326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:14.686382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:14.711394 2149446 cri.go:96] found id: ""
	I1222 01:54:14.711422 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.711432 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:14.711438 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:14.711497 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:14.739481 2149446 cri.go:96] found id: ""
	I1222 01:54:14.739551 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.739573 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:14.739587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:14.739660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:14.768068 2149446 cri.go:96] found id: ""
	I1222 01:54:14.768090 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.768098 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:14.768105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:14.768164 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:14.794022 2149446 cri.go:96] found id: ""
	I1222 01:54:14.794045 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.794054 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:14.794061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:14.794122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:14.826016 2149446 cri.go:96] found id: ""
	I1222 01:54:14.826083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.826098 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:14.826105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:14.826165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:14.862200 2149446 cri.go:96] found id: ""
	I1222 01:54:14.862227 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.862238 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:14.862247 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:14.862266 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.936812 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:14.936849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:14.955319 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:14.955350 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:15.031542 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:15.031567 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:15.031581 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:15.060447 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:15.060486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:17.589430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:17.599883 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:17.600008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:17.629312 2149446 cri.go:96] found id: ""
	I1222 01:54:17.629354 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.629363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:17.629370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:17.629430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:17.666013 2149446 cri.go:96] found id: ""
	I1222 01:54:17.666039 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.666048 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:17.666054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:17.666114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:17.690505 2149446 cri.go:96] found id: ""
	I1222 01:54:17.690528 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.690536 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:17.690543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:17.690603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:17.716599 2149446 cri.go:96] found id: ""
	I1222 01:54:17.716623 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.716631 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:17.716647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:17.716706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:17.741489 2149446 cri.go:96] found id: ""
	I1222 01:54:17.741513 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.741522 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:17.741528 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:17.741587 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:17.770157 2149446 cri.go:96] found id: ""
	I1222 01:54:17.770183 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.770192 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:17.770198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:17.770286 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:17.794804 2149446 cri.go:96] found id: ""
	I1222 01:54:17.794828 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.794843 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:17.794850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:17.794912 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:17.820386 2149446 cri.go:96] found id: ""
	I1222 01:54:17.820411 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.820421 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:17.820429 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:17.820441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:17.886892 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:17.886927 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:17.906329 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:17.906358 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:17.968971 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:17.968994 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:17.969007 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:17.997165 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:17.997196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.530538 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:20.541178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:20.541247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:20.567334 2149446 cri.go:96] found id: ""
	I1222 01:54:20.567358 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.567367 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:20.567373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:20.567433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:20.594495 2149446 cri.go:96] found id: ""
	I1222 01:54:20.594519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.594528 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:20.594534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:20.594592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:20.627810 2149446 cri.go:96] found id: ""
	I1222 01:54:20.627837 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.627846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:20.627853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:20.627913 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:20.659100 2149446 cri.go:96] found id: ""
	I1222 01:54:20.659127 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.659136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:20.659142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:20.659203 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:20.684059 2149446 cri.go:96] found id: ""
	I1222 01:54:20.684083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.684092 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:20.684099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:20.684158 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:20.708264 2149446 cri.go:96] found id: ""
	I1222 01:54:20.708289 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.708298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:20.708305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:20.708373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:20.732615 2149446 cri.go:96] found id: ""
	I1222 01:54:20.732640 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.732649 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:20.732656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:20.732718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:20.764171 2149446 cri.go:96] found id: ""
	I1222 01:54:20.764198 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.764207 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:20.764215 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:20.764228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:20.780713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:20.780742 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:20.851938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:20.851961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:20.851974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:20.893661 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:20.893699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.931494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:20.931521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:23.498941 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:23.509907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:23.509983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:23.539967 2149446 cri.go:96] found id: ""
	I1222 01:54:23.539997 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.540005 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:23.540012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:23.540071 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:23.569856 2149446 cri.go:96] found id: ""
	I1222 01:54:23.569884 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.569894 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:23.569900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:23.569961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:23.604014 2149446 cri.go:96] found id: ""
	I1222 01:54:23.604082 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.604103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:23.604124 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:23.604217 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:23.632481 2149446 cri.go:96] found id: ""
	I1222 01:54:23.632510 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.632519 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:23.632525 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:23.632586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:23.663814 2149446 cri.go:96] found id: ""
	I1222 01:54:23.663838 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.663847 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:23.663853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:23.663916 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:23.693790 2149446 cri.go:96] found id: ""
	I1222 01:54:23.693815 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.693824 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:23.693831 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:23.693891 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:23.722928 2149446 cri.go:96] found id: ""
	I1222 01:54:23.722995 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.723009 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:23.723017 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:23.723077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:23.749108 2149446 cri.go:96] found id: ""
	I1222 01:54:23.749133 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.749142 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:23.749151 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:23.749162 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:23.814472 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:23.814506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:23.831927 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:23.831957 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:23.920909 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:23.912937   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.913679   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915294   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915601   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.917055   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:23.920931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:23.920943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:23.949424 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:23.949460 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
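The timestamps show the outer loop's cadence: roughly every three seconds (01:54:23, 01:54:26, 01:54:29, ...) the collector re-probes for the apiserver and, failing that, re-gathers the same logs. A rough sketch of that retry shape, with probe() and gatherLogs() as hypothetical stand-ins for the steps above (not minikube's real control flow):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// probe stands in for the pgrep + crictl checks above; in this failing
// run it never succeeds.
func probe() bool { return false }

// gatherLogs stands in for the kubelet/dmesg/describe-nodes/CRI-O dumps.
func gatherLogs() { fmt.Println("gathering kubelet, dmesg, CRI-O logs ...") }

// waitForAPIServer retries until probe succeeds or the deadline passes.
func waitForAPIServer(deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if probe() {
			return nil
		}
		gatherLogs()
		time.Sleep(3 * time.Second) // cadence visible in the timestamps
	}
	return errors.New("kube-apiserver never became visible")
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```

The deadline here is arbitrary; in this excerpt the real run keeps cycling from 01:54:23 through at least 01:54:47, which is why the log repeats.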
[The same polling cycle repeats at 01:54:26, 01:54:29, 01:54:32, 01:54:35, 01:54:38, 01:54:41, and 01:54:44: pgrep finds no kube-apiserver process; crictl lists no containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard; the kubelet, dmesg, CRI-O, and container-status gathers run (their order varies slightly between iterations); and "kubectl describe nodes" fails each time with the connection-refused error on localhost:8443 shown above (kubectl PIDs 12535, 12646, 12757, 12874, 13001, 13111, 13207).]
	I1222 01:54:47.124887 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:47.137092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:47.137169 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:47.164240 2149446 cri.go:96] found id: ""
	I1222 01:54:47.164268 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.164277 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:47.164284 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:47.164343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:47.189698 2149446 cri.go:96] found id: ""
	I1222 01:54:47.189723 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.189732 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:47.189739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:47.189799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:47.215248 2149446 cri.go:96] found id: ""
	I1222 01:54:47.215276 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.215285 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:47.215291 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:47.215350 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:47.241376 2149446 cri.go:96] found id: ""
	I1222 01:54:47.241401 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.241409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:47.241416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:47.241475 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:47.266358 2149446 cri.go:96] found id: ""
	I1222 01:54:47.266383 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.266392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:47.266400 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:47.266477 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:47.292142 2149446 cri.go:96] found id: ""
	I1222 01:54:47.292167 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.292175 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:47.292182 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:47.292260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:47.318478 2149446 cri.go:96] found id: ""
	I1222 01:54:47.318503 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.318511 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:47.318520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:47.318581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:47.343233 2149446 cri.go:96] found id: ""
	I1222 01:54:47.343313 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.343328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:47.343338 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:47.343351 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:47.360045 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:47.360074 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:47.426418 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:47.426442 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:47.426457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:47.455495 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:47.455529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.485526 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:47.485555 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:50.052484 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:50.066978 2149446 out.go:203] 
	W1222 01:54:50.069810 2149446 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:54:50.069848 2149446 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:54:50.069857 2149446 out.go:285] * Related issues:
	W1222 01:54:50.069872 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:54:50.069887 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:54:50.072764 2149446 out.go:203] 
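	The exit above is minikube's aggregate verdict: across the whole 6m0s wait window, no kube-apiserver process or container ever appeared on the node. A minimal way to re-run the same two probes by hand, assuming the profile container is still up and reusing the exact commands the log shows (pgrep and crictl):
	
	# Sketch only: repeat the probes minikube ran above, from the host.
	# Both print nothing on this node, matching K8S_APISERVER_MISSING.
	out/minikube-linux-arm64 ssh -p newest-cni-704217 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	out/minikube-linux-arm64 ssh -p newest-cni-704217 -- sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver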
	
	
	==> CRI-O <==
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.44871456Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448722183Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448729674Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448735557Z" level=info msg="RDT not available in the host system"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448747987Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449934856Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449963803Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449983872Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450671237Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450694973Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450859498Z" level=info msg="Updated default CNI network name to "
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.451450093Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.451791835Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.45184593Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.49846563Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498673066Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498765978Z" level=info msg="Create NRI interface"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498873749Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498887845Z" level=info msg="runtime interface created"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498899816Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498906454Z" level=info msg="runtime interface starting up..."
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498912689Z" level=info msg="starting plugins..."
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498927811Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.49901347Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:48:46 newest-cni-704217 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
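	
	Note that CRI-O itself comes up cleanly (the unit reaches Started), so the container runtime is not the blocker here. A quick sanity check for the runtime side, a sketch assuming the standard CRI-O socket path shown in the configuration above:
	
	# Sketch: confirm CRI-O is healthy before suspecting it.
	sudo systemctl is-active crio
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version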
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:59.124626   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:59.125359   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:59.127019   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:59.127326   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:59.128802   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:54:59 up 1 day,  8:37,  0 user,  load average: 0.59, 0.69, 1.18
	Linux newest-cni-704217 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:54:56 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:54:57 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 22 01:54:57 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:57 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:57 newest-cni-704217 kubelet[13710]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:57 newest-cni-704217 kubelet[13710]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:57 newest-cni-704217 kubelet[13710]: E1222 01:54:57.399336   13710 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:54:57 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:54:57 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:58 newest-cni-704217 kubelet[13730]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:58 newest-cni-704217 kubelet[13730]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:58 newest-cni-704217 kubelet[13730]: E1222 01:54:58.153681   13730 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:54:58 newest-cni-704217 kubelet[13770]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:58 newest-cni-704217 kubelet[13770]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:54:58 newest-cni-704217 kubelet[13770]: E1222 01:54:58.905947   13770 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:54:58 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
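	
	This kubelet crash loop is the actual root cause of the missing apiserver: the v1.35.0-rc.1 kubelet fails configuration validation on a cgroup v1 host and exits, systemd restarts it (the counter is at 493), and no static pods, including kube-apiserver, are ever launched. The host here (Ubuntu 20.04, kernel 5.15, cgroupfs driver) defaults to cgroup v1. A quick check of the host cgroup version, run on the node:
	
	# Sketch: identify the host cgroup version.
	# Prints "cgroup2fs" on cgroup v2 and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/
	# The Docker daemon reports the same information:
	docker info --format '{{.CgroupVersion}}'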
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (337.647065ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-704217" apiserver is not running, skipping kubectl commands (state="Stopped")
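`minikube status` reports the container (`Host`) separately from the control plane (`APIServer`), which is why the post-mortem sees `Stopped` above and `Running` for the same profile further down. A combined one-shot view, assuming the same profile name:

    out/minikube-linux-arm64 status -p newest-cni-704217 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'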
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-704217
helpers_test.go:244: (dbg) docker inspect newest-cni-704217:

-- stdout --
	[
	    {
	        "Id": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	        "Created": "2025-12-22T01:38:39.533406818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2149579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:48:40.537015605Z",
	            "FinishedAt": "2025-12-22T01:48:39.100562621Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/hosts",
	        "LogPath": "/var/lib/docker/containers/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e/0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e-json.log",
	        "Name": "/newest-cni-704217",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-704217:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-704217",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d8468deabde7fc79b744e69fb4ac58de6e248cd6f8cb31bb076434fb4fc379e",
	                "LowerDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0df6ea665da597441ebc8f1eec8038b3999465971821d1986f3cd7e03feb54a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-704217",
	                "Source": "/var/lib/docker/volumes/newest-cni-704217/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-704217",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-704217",
	                "name.minikube.sigs.k8s.io": "newest-cni-704217",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14d50a92928bec4b8ab05dab0866e0253bd6e6099eb5f12680df80e7cd980f01",
	            "SandboxKey": "/var/run/docker/netns/14d50a92928b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38786"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38787"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38790"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38788"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38789"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-704217": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:54:eb:22:0d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37c5f2b876d48064d8ed7b8ba7f6e9a5833c911bae1041e13f11dbc7e56a4324",
	                    "EndpointID": "adf54697897a7d0e245c3181f4c38786594da1f48236d2a9134636be2465ea29",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-704217",
	                        "0d8468deabde"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
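The inspect dump confirms the container is up (`"Running": true`) with port 8443 forwarded to 127.0.0.1:38789; only the process inside is missing. For a dump this large, `docker inspect` Go templates can pull out just the fields that matter, for example:

    docker inspect -f '{{.State.Status}}' newest-cni-704217
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-704217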
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (363.421235ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-704217 logs -n 25: (1.57478429s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p embed-certs-235567 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p embed-certs-235567                                                                                                                                                                                                                              │ embed-certs-235567           │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ delete  │ -p disable-driver-mounts-654908                                                                                                                                                                                                                    │ disable-driver-mounts-654908 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-733235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-733235 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:37 UTC │
	│ start   │ -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:37 UTC │ 22 Dec 25 01:38 UTC │
	│ image   │ default-k8s-diff-port-733235 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ pause   │ -p default-k8s-diff-port-733235 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ delete  │ -p default-k8s-diff-port-733235                                                                                                                                                                                                                    │ default-k8s-diff-port-733235 │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:41 UTC │                     │
	│ stop    │ -p no-preload-491054 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ addons  │ enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │ 22 Dec 25 01:43 UTC │
	│ start   │ -p no-preload-491054 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-491054            │ jenkins │ v1.37.0 │ 22 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-704217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-704217 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ addons  │ enable dashboard -p newest-cni-704217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │ 22 Dec 25 01:48 UTC │
	│ start   │ -p newest-cni-704217 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:48 UTC │                     │
	│ image   │ newest-cni-704217 image list --format=json                                                                                                                                                                                                         │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:54 UTC │ 22 Dec 25 01:54 UTC │
	│ pause   │ -p newest-cni-704217 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:54 UTC │ 22 Dec 25 01:54 UTC │
	│ unpause │ -p newest-cni-704217 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-704217            │ jenkins │ v1.37.0 │ 22 Dec 25 01:54 UTC │ 22 Dec 25 01:54 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:48:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
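	Decoding the first entry below against that format, as a worked example: in "I1222 01:48:40.242767 2149446 out.go:360] Setting OutFile to fd 1 ...", "I" is the severity (Info; W/E/F for warning/error/fatal), "1222" is mmdd (December 22), "01:48:40.242767" is the wall-clock time, "2149446" is the thread id (here the minikube process), "out.go:360" is the source file and line, and the remainder is the message.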
	I1222 01:48:40.242767 2149446 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:48:40.242975 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243012 2149446 out.go:374] Setting ErrFile to fd 2...
	I1222 01:48:40.243032 2149446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:48:40.243304 2149446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:48:40.243710 2149446 out.go:368] Setting JSON to false
	I1222 01:48:40.244718 2149446 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117070,"bootTime":1766251051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:48:40.244822 2149446 start.go:143] virtualization:  
	I1222 01:48:40.247655 2149446 out.go:179] * [newest-cni-704217] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1222 01:48:35.596802 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:37.597647 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:40.096931 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:40.251418 2149446 notify.go:221] Checking for updates...
	I1222 01:48:40.254130 2149446 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:48:40.257153 2149446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:48:40.259997 2149446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:40.262948 2149446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:48:40.265828 2149446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:48:40.268754 2149446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:48:40.272145 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:40.272826 2149446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:48:40.305841 2149446 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:48:40.306001 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.377826 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.367957332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.377936 2149446 docker.go:319] overlay module found
	I1222 01:48:40.381097 2149446 out.go:179] * Using the docker driver based on existing profile
	I1222 01:48:40.383847 2149446 start.go:309] selected driver: docker
	I1222 01:48:40.383862 2149446 start.go:928] validating driver "docker" against &{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.383955 2149446 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:48:40.384655 2149446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:48:40.450570 2149446 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:48:40.441705374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:48:40.450907 2149446 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:48:40.450934 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:40.450985 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
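
The two cni.go lines above record minikube's CNI auto-selection: no CNI was requested explicitly, and the docker driver combined with the crio runtime defaults to kindnet. A minimal sketch of that decision, assuming a simplified driver/runtime model (the real logic in minikube's cni package covers many more combinations):

package main

import "fmt"

// chooseCNI approximates the auto-selection logged by cni.go: with no CNI
// requested, a KIC driver (docker/podman) paired with a non-docker runtime
// such as cri-o gets kindnet by default. This is a sketch, not minikube's
// actual implementation.
func chooseCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested // honor an explicit --cni flag
	}
	if (driver == "docker" || driver == "podman") && runtime != "docker" {
		return "kindnet"
	}
	return "" // empty: leave networking to the runtime's default
}

func main() {
	fmt.Println(chooseCNI("docker", "crio", "")) // prints "kindnet", as in the log
}
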
	I1222 01:48:40.451021 2149446 start.go:353] cluster config:
	{Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:40.454057 2149446 out.go:179] * Starting "newest-cni-704217" primary control-plane node in "newest-cni-704217" cluster
	I1222 01:48:40.456938 2149446 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 01:48:40.459827 2149446 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:48:40.462746 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:40.462788 2149446 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 01:48:40.462800 2149446 cache.go:65] Caching tarball of preloaded images
	I1222 01:48:40.462841 2149446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:48:40.462889 2149446 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 01:48:40.462900 2149446 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 01:48:40.463007 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.482525 2149446 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:48:40.482547 2149446 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:48:40.482567 2149446 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:48:40.482597 2149446 start.go:360] acquireMachinesLock for newest-cni-704217: {Name:mka558c8479b73518c6430cb6c53badc7e0010e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:48:40.482656 2149446 start.go:364] duration metric: took 36.504µs to acquireMachinesLock for "newest-cni-704217"
	I1222 01:48:40.482682 2149446 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:48:40.482690 2149446 fix.go:54] fixHost starting: 
	I1222 01:48:40.482953 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.499420 2149446 fix.go:112] recreateIfNeeded on newest-cni-704217: state=Stopped err=<nil>
	W1222 01:48:40.499450 2149446 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:48:40.502622 2149446 out.go:252] * Restarting existing docker container for "newest-cni-704217" ...
	I1222 01:48:40.502704 2149446 cli_runner.go:164] Run: docker start newest-cni-704217
	I1222 01:48:40.772557 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:40.795810 2149446 kic.go:430] container "newest-cni-704217" state is running.
	I1222 01:48:40.796193 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:40.819335 2149446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/config.json ...
	I1222 01:48:40.819554 2149446 machine.go:94] provisionDockerMachine start ...
	I1222 01:48:40.819841 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:40.841916 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:40.842255 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:40.842265 2149446 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:48:40.844138 2149446 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
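
The handshake EOF here is expected: the container was just started, and sshd inside it is not yet accepting connections; libmachine simply retries until the handshake succeeds, which it does about three seconds later (next line). A sketch of that wait-for-SSH loop using golang.org/x/crypto/ssh (the user, 500ms interval, and omitted key auth are illustrative, not minikube's exact values):

package main

import (
	"log"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr until an SSH handshake succeeds or the deadline
// passes. Right after `docker start` the container's sshd is usually not
// ready, so early attempts fail with EOF, exactly as in the log.
func waitForSSH(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	var lastErr error
	for end := time.Now().Add(deadline); time.Now().Before(end); {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(500 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	cfg := &ssh.ClientConfig{ // illustrative: key auth and host-key checks omitted
		User:            "docker",
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         3 * time.Second,
	}
	client, err := waitForSSH("127.0.0.1:38786", cfg, time.Minute) // port from the 22/tcp mapping above
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
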
	I1222 01:48:43.976885 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:43.976911 2149446 ubuntu.go:182] provisioning hostname "newest-cni-704217"
	I1222 01:48:43.977014 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:43.994105 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:43.994420 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:43.994436 2149446 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-704217 && echo "newest-cni-704217" | sudo tee /etc/hostname
	I1222 01:48:44.140605 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-704217
	
	I1222 01:48:44.140693 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.161016 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.161373 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.161391 2149446 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-704217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-704217/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-704217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:48:44.293770 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:48:44.293859 2149446 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 01:48:44.293905 2149446 ubuntu.go:190] setting up certificates
	I1222 01:48:44.293936 2149446 provision.go:84] configureAuth start
	I1222 01:48:44.294013 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:44.316482 2149446 provision.go:143] copyHostCerts
	I1222 01:48:44.316547 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 01:48:44.316556 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 01:48:44.316633 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 01:48:44.316728 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 01:48:44.316733 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 01:48:44.316759 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 01:48:44.316806 2149446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 01:48:44.316811 2149446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 01:48:44.316833 2149446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 01:48:44.316876 2149446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.newest-cni-704217 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-704217]
	I1222 01:48:44.455342 2149446 provision.go:177] copyRemoteCerts
	I1222 01:48:44.455417 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:48:44.455467 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.474519 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:44.573438 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:48:44.590713 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:48:44.609690 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:48:44.627314 2149446 provision.go:87] duration metric: took 333.343615ms to configureAuth
	I1222 01:48:44.627344 2149446 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:48:44.627539 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:44.627653 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.649833 2149446 main.go:144] libmachine: Using SSH client type: native
	I1222 01:48:44.650146 2149446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38786 <nil> <nil>}
	I1222 01:48:44.650167 2149446 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 01:48:44.926182 2149446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 01:48:44.926204 2149446 machine.go:97] duration metric: took 4.106640691s to provisionDockerMachine
	I1222 01:48:44.926234 2149446 start.go:293] postStartSetup for "newest-cni-704217" (driver="docker")
	I1222 01:48:44.926251 2149446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:48:44.926311 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:48:44.926356 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:44.946595 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.045827 2149446 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:48:45.062421 2149446 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:48:45.062452 2149446 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:48:45.062466 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 01:48:45.062535 2149446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 01:48:45.062625 2149446 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 01:48:45.062747 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:48:45.075486 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:45.105717 2149446 start.go:296] duration metric: took 179.465684ms for postStartSetup
	I1222 01:48:45.105860 2149446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:48:45.105935 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.131206 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	W1222 01:48:42.097949 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:44.597158 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:45.246727 2149446 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:48:45.256769 2149446 fix.go:56] duration metric: took 4.774070481s for fixHost
	I1222 01:48:45.256799 2149446 start.go:83] releasing machines lock for "newest-cni-704217", held for 4.774128563s
	I1222 01:48:45.256887 2149446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-704217
	I1222 01:48:45.279239 2149446 ssh_runner.go:195] Run: cat /version.json
	I1222 01:48:45.279292 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.279295 2149446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:48:45.279352 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:45.302892 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.315026 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:45.495990 2149446 ssh_runner.go:195] Run: systemctl --version
	I1222 01:48:45.502719 2149446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 01:48:45.539801 2149446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:48:45.545121 2149446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:48:45.545189 2149446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:48:45.553556 2149446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
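
The find command above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube is about to install; in this run nothing matched. A rough Go equivalent of the same sweep (a sketch, not minikube's implementation; it skips find's -type f check):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIs mirrors the logged find/mv: any first-level entry in
// /etc/cni/net.d whose name contains "bridge" or "podman" (and is not
// already *.mk_disabled) is renamed out of the way.
func disableBridgeCNIs(dir string) error {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous pass
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Printf("disabled %s\n", m)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
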
	I1222 01:48:45.553582 2149446 start.go:496] detecting cgroup driver to use...
	I1222 01:48:45.553612 2149446 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:48:45.553661 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 01:48:45.568706 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 01:48:45.581738 2149446 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:48:45.581818 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:48:45.597902 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:48:45.614693 2149446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:48:45.735512 2149446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:48:45.846918 2149446 docker.go:234] disabling docker service ...
	I1222 01:48:45.847000 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:48:45.861997 2149446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:48:45.874967 2149446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:48:45.982721 2149446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:48:46.093378 2149446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:48:46.106471 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:48:46.120288 2149446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 01:48:46.120369 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.130327 2149446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 01:48:46.130417 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.139300 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.148385 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.157463 2149446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:48:46.165914 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.175804 2149446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.185241 2149446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 01:48:46.194290 2149446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:48:46.201953 2149446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:48:46.209542 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.327150 2149446 ssh_runner.go:195] Run: sudo systemctl restart crio
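
Taken together, the crictl.yaml write and the sed edits above leave cri-o configured for the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. Reconstructed from those commands alone, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should end up roughly as follows (the TOML section headers are assumed for orientation; the sed scripts only rewrite the keys):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

The daemon-reload and `systemctl restart crio` that follow are what make these edits take effect.
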
	I1222 01:48:46.504853 2149446 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 01:48:46.504948 2149446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 01:48:46.509123 2149446 start.go:564] Will wait 60s for crictl version
	I1222 01:48:46.509240 2149446 ssh_runner.go:195] Run: which crictl
	I1222 01:48:46.513227 2149446 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:48:46.539190 2149446 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 01:48:46.539329 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.570401 2149446 ssh_runner.go:195] Run: crio --version
	I1222 01:48:46.603161 2149446 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1222 01:48:46.606112 2149446 cli_runner.go:164] Run: docker network inspect newest-cni-704217 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:48:46.622587 2149446 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:48:46.626479 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.639159 2149446 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:48:46.642073 2149446 kubeadm.go:884] updating cluster {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:48:46.642257 2149446 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 01:48:46.642336 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.682401 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.682426 2149446 crio.go:433] Images already preloaded, skipping extraction
	I1222 01:48:46.682483 2149446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:48:46.712357 2149446 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 01:48:46.712379 2149446 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:48:46.712388 2149446 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1222 01:48:46.712483 2149446 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-704217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:48:46.712573 2149446 ssh_runner.go:195] Run: crio config
	I1222 01:48:46.797974 2149446 cni.go:84] Creating CNI manager for ""
	I1222 01:48:46.798006 2149446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 01:48:46.798025 2149446 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:48:46.798055 2149446 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-704217 NodeName:newest-cni-704217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:48:46.798231 2149446 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-704217"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:48:46.798309 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:48:46.806374 2149446 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:48:46.806461 2149446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:48:46.814198 2149446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1222 01:48:46.828274 2149446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:48:46.841483 2149446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1222 01:48:46.854374 2149446 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:48:46.857976 2149446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:48:46.868061 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:46.994841 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:47.017882 2149446 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217 for IP: 192.168.85.2
	I1222 01:48:47.017947 2149446 certs.go:195] generating shared ca certs ...
	I1222 01:48:47.017976 2149446 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.018155 2149446 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 01:48:47.018235 2149446 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 01:48:47.018271 2149446 certs.go:257] generating profile certs ...
	I1222 01:48:47.018410 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/client.key
	I1222 01:48:47.018519 2149446 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key.d42a7856
	I1222 01:48:47.018584 2149446 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key
	I1222 01:48:47.018730 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 01:48:47.018786 2149446 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 01:48:47.018810 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 01:48:47.018867 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:48:47.018918 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:48:47.018973 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 01:48:47.019053 2149446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 01:48:47.019666 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:48:47.038515 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:48:47.057188 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:48:47.080288 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:48:47.101214 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:48:47.120685 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:48:47.140350 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:48:47.160406 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/newest-cni-704217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:48:47.178292 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 01:48:47.195480 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:48:47.213647 2149446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 01:48:47.231427 2149446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:48:47.244572 2149446 ssh_runner.go:195] Run: openssl version
	I1222 01:48:47.250948 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.258307 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 01:48:47.265613 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.269481 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 01:48:47.314712 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:48:47.322028 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.329298 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:48:47.336716 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340415 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.340535 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:48:47.381670 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:48:47.389031 2149446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.396101 2149446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 01:48:47.403471 2149446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407274 2149446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.407346 2149446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 01:48:47.449487 2149446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
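
The openssl/ln sequence above is how each CA certificate is wired into the system trust store: the PEM is linked into /etc/ssl/certs, `openssl x509 -hash` yields its subject hash, and the final `test -L` confirms the <hash>.0 symlink that OpenSSL-based clients resolve. A hedged Go sketch of the same steps (paths from the log; error handling simplified, and the ln -fs "force" semantics are approximated):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the logged sequence: link the PEM into the shared
// cert dir, ask openssl for its subject hash, and verify the <hash>.0
// symlink that OpenSSL-based clients look up.
func installCACert(pem string) error {
	target := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
	if err := os.Symlink(pem, target); err != nil && !os.IsExist(err) {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hashLink := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(hashLink); err != nil {
		return fmt.Errorf("expected hash link %s: %w", hashLink, err)
	}
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
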
	I1222 01:48:47.456987 2149446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:48:47.460641 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:48:47.501381 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:48:47.542911 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:48:47.587141 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:48:47.628220 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:48:47.670351 2149446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
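
`-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is the condition that would trigger certificate regeneration here. The equivalent test with Go's crypto/x509, as an illustration only:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d — the same test as `openssl x509 -checkend`, which exits non-zero
// when the cert is that close to (or past) NotAfter.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
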
	I1222 01:48:47.715724 2149446 kubeadm.go:401] StartCluster: {Name:newest-cni-704217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-704217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:48:47.715830 2149446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 01:48:47.715926 2149446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:48:47.743013 2149446 cri.go:96] found id: ""
	I1222 01:48:47.743133 2149446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:48:47.750858 2149446 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:48:47.750922 2149446 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:48:47.750979 2149446 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:48:47.758109 2149446 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:48:47.758748 2149446 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-704217" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.759013 2149446 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1874095/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-704217" cluster setting kubeconfig missing "newest-cni-704217" context setting]
	I1222 01:48:47.759473 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.760858 2149446 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:48:47.768601 2149446 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
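
The restart path renders a fresh kubeadm config to kubeadm.yaml.new (the 2217-byte scp earlier) and diffs it against the copy already on disk; an empty diff means the existing control plane can be reused, which is why restartPrimaryControlPlane completes here in under 20ms. A small sketch of that compare-before-reconfigure guard (file names taken from the log; this is the shape of the check, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig reports whether the newly rendered kubeadm config differs
// from the one already on disk; the restart path skips `kubeadm init` when
// the two are byte-identical (the `diff -u` in the log).
func needsReconfig(current, rendered string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		if os.IsNotExist(err) {
			return true, nil // nothing on disk yet: must configure
		}
		return false, err
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("reconfiguration required:", changed)
}
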
	I1222 01:48:47.768668 2149446 kubeadm.go:602] duration metric: took 17.733602ms to restartPrimaryControlPlane
	I1222 01:48:47.768685 2149446 kubeadm.go:403] duration metric: took 52.971018ms to StartCluster
	I1222 01:48:47.768701 2149446 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.768757 2149446 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:48:47.769685 2149446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:48:47.769905 2149446 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 01:48:47.770207 2149446 config.go:182] Loaded profile config "newest-cni-704217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 01:48:47.770291 2149446 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:48:47.770386 2149446 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-704217"
	I1222 01:48:47.770417 2149446 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-704217"
	I1222 01:48:47.770506 2149446 addons.go:70] Setting dashboard=true in profile "newest-cni-704217"
	I1222 01:48:47.770541 2149446 addons.go:70] Setting default-storageclass=true in profile "newest-cni-704217"
	I1222 01:48:47.770547 2149446 addons.go:239] Setting addon dashboard=true in "newest-cni-704217"
	W1222 01:48:47.770588 2149446 addons.go:248] addon dashboard should already be in state true
	I1222 01:48:47.770610 2149446 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-704217"
	I1222 01:48:47.770621 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.770999 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.771324 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.770518 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.772038 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.775384 2149446 out.go:179] * Verifying Kubernetes components...
	I1222 01:48:47.778350 2149446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:48:47.802410 2149446 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:48:47.805256 2149446 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:48:47.807963 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:48:47.807985 2149446 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:48:47.808059 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.828399 2149446 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:48:47.831382 2149446 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:47.831415 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:48:47.831508 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.834144 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.844610 2149446 addons.go:239] Setting addon default-storageclass=true in "newest-cni-704217"
	I1222 01:48:47.844670 2149446 host.go:66] Checking if "newest-cni-704217" exists ...
	I1222 01:48:47.845187 2149446 cli_runner.go:164] Run: docker container inspect newest-cni-704217 --format={{.State.Status}}
	I1222 01:48:47.877451 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:47.891162 2149446 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:47.891205 2149446 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:48:47.891280 2149446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-704217
	I1222 01:48:47.926288 2149446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38786 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/newest-cni-704217/id_rsa Username:docker}
	I1222 01:48:48.042819 2149446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:48:48.112488 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:48:48.112513 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:48:48.128623 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:48:48.128699 2149446 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:48:48.148283 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:48:48.148321 2149446 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:48:48.156766 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:48.161900 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:48.167407 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:48:48.167429 2149446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:48:48.203144 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:48:48.203175 2149446 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:48:48.254999 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:48:48.255073 2149446 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:48:48.269219 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:48:48.269248 2149446 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:48:48.282496 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:48:48.282561 2149446 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:48:48.295350 2149446 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.295416 2149446 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:48:48.308102 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:48.846401 2149446 api_server.go:52] waiting for apiserver process to appear ...
	W1222 01:48:48.846466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:48.846520 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:48.846522 2149446 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
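
The applies fail because the apiserver behind localhost:8443 is still coming up after the restart, so the addon manager schedules another attempt 300ms later (retry.go:84 above). A generic sketch of that retry shape (minikube's actual helper differs; the doubling backoff here is illustrative):

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff keeps calling apply until it succeeds or attempts run
// out, sleeping between tries and doubling the delay each time. This is
// the shape of the "will retry after 300ms" lines in the log, not
// minikube's exact helper.
func retryWithBackoff(apply func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connect: connection refused") // apiserver not up yet
		}
		return nil
	}, 5, 300*time.Millisecond)
	fmt.Println("result:", err)
}
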
	W1222 01:48:48.846636 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:48.846839 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.013510 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:49.073085 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.109119 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:48:49.170278 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.173753 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.230752 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.347104 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:49.363263 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:49.431947 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.591271 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:48:49.624743 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:49.676288 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:48:49.711818 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:49.847130 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.002166 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:50.073384 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.206474 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:46.597453 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:49.097694 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
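The two W lines above come from a different process (pid 2142162, the no-preload-491054 start from TestStartStop) whose output is interleaved into this log, which is why its timestamps run behind; it is polling the node's Ready condition against its own apiserver at 192.168.76.2:8443 and hitting the same connection-refused wall. The check behind node_ready.go:55 amounts to reading one condition off the Node object; a hedged client-go sketch of that check, with an assumed function name:

	package nodeready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	// While the apiserver is down, the Get itself fails with the
	// "connection refused" errors seen in the log, and the caller retries.
	func nodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}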
	W1222 01:48:50.265056 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.347283 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:50.486129 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:50.543791 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:50.847346 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.143881 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:51.205787 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.283923 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:51.341457 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:51.542158 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:51.609543 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:51.847044 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.045399 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:52.115058 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.347521 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:52.607702 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:52.683195 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:52.846571 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.305882 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:53.347187 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:53.376466 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.687056 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:53.754007 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:53.847202 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:53.994728 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:54.057241 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:54.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:54.846622 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:51.596668 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:53.597225 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:48:55.347576 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:55.800828 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:48:55.847504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:48:55.860654 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.347301 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:56.472869 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:56.531116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:56.846537 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.346757 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:57.706997 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:57.770163 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:57.847151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.347452 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:58.521177 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:48:58.583041 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:48:58.847501 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.347147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.847527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:48:59.918746 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:48:59.977904 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.192217 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:48:55.597516 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:48:57.597678 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.100741 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:00.333722 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:00.346752 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:00.847514 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.346636 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:01.846688 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.347266 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:02.846533 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.346590 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:03.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:04.346607 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
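Between apply attempts the log is dominated by one probe repeated every 500ms: sudo pgrep -xnf kube-apiserver.*minikube.*, the process check behind api_server.go:52's "waiting for apiserver process to appear". A minimal sketch of that poll, with an assumed helper name:

	package apiserver

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process
	// matching minikube's pattern exists or the deadline passes; pgrep
	// exits 0 only when at least one process matches.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the .347/.847 cadence above
		}
		return fmt.Errorf("timed out waiting for kube-apiserver process")
	}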
	I1222 01:49:04.761472 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:49:04.773029 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:49:04.847157 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:04.857098 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.857138 2149446 retry.go:84] will retry after 7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1222 01:49:04.857185 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:04.879231 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:04.950615 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:02.596826 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:05.096792 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:05.347388 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:05.847467 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.346572 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:06.846691 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.347421 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:07.846544 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.347439 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:08.847592 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:09.847473 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:07.596780 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:09.597714 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:10.346839 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:10.642468 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:10.708302 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:10.708346 2149446 retry.go:84] will retry after 9.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1222 01:49:10.847493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.346650 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:11.846502 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:11.932894 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:12.347259 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:12.847370 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.346723 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:13.846605 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.347318 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:14.846610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:12.096720 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:14.596604 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:15.276803 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:15.337116 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:15.337152 2149446 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1222 01:49:15.347320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:15.847453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:16.846547 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.347494 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:17.847450 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.347436 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:18.846663 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.347534 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:19.846595 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:16.596756 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:19.096677 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:20.347375 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:20.550393 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:20.629907 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:20.629948 2149446 retry.go:84] will retry after 13.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1222 01:49:20.847352 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.347359 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:21.847541 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.346619 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:22.846606 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.347285 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:23.847257 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.346695 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:24.847195 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:21.096800 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:23.596617 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:25.347322 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:25.847511 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.346649 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:26.847539 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.261998 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:27.328263 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:27.328301 2149446 retry.go:84] will retry after 13.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1222 01:49:27.347392 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:27.847138 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.346635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:28.547391 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:28.610028 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:28.847445 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.346661 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:29.847366 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:25.596765 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:28.096729 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:30.346864 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:30.846648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.346634 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:31.846643 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.346648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:32.846656 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.346602 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:33.846639 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.347535 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:34.494891 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:34.556245 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:34.556282 2149446 retry.go:84] will retry after 24.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1222 01:49:34.846617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:49:30.596693 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:33.096665 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:35.596759 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	W1222 01:49:38.096842 2142162 node_ready.go:55] error getting node "no-preload-491054" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-491054": dial tcp 192.168.76.2:8443: connect: connection refused
	I1222 01:49:38.596364 2142162 node_ready.go:38] duration metric: took 6m0.000147881s for node "no-preload-491054" to be "Ready" ...
	I1222 01:49:38.599511 2142162 out.go:203] 
	W1222 01:49:38.602362 2142162 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:49:38.602381 2142162 out.go:285] * 
	W1222 01:49:38.604626 2142162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:49:38.608399 2142162 out.go:203] 
	I1222 01:49:35.347129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:35.846681 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.346565 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:36.846669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.347286 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:37.846895 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.346662 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:38.847012 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.347115 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:39.846591 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.347475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:40.735554 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:49:40.795474 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:40.795515 2149446 retry.go:84] will retry after 21.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1222 01:49:40.846792 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.172727 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:41.231783 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:41.347078 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:41.846609 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.346631 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:42.847610 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.346556 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:43.847443 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.347395 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:44.847490 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.346921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:45.846697 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.346758 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:46.847524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.346683 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:47.846830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:47.846917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:47.875900 2149446 cri.go:96] found id: ""
	I1222 01:49:47.875920 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.875929 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:47.875935 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:47.875999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:47.911877 2149446 cri.go:96] found id: ""
	I1222 01:49:47.911899 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.911908 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:47.911914 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:47.911982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:47.941469 2149446 cri.go:96] found id: ""
	I1222 01:49:47.941492 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.941500 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:47.941507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:47.941566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:47.967048 2149446 cri.go:96] found id: ""
	I1222 01:49:47.967088 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.967097 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:47.967104 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:47.967180 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:47.995110 2149446 cri.go:96] found id: ""
	I1222 01:49:47.995135 2149446 logs.go:282] 0 containers: []
	W1222 01:49:47.995144 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:47.995150 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:47.995212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:48.023445 2149446 cri.go:96] found id: ""
	I1222 01:49:48.023513 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.023525 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:48.023533 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:48.023607 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:48.051956 2149446 cri.go:96] found id: ""
	I1222 01:49:48.052023 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.052044 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:48.052063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:48.052151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:48.079789 2149446 cri.go:96] found id: ""
	I1222 01:49:48.079817 2149446 logs.go:282] 0 containers: []
	W1222 01:49:48.079827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:48.079836 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:48.079847 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:48.148881 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:48.148917 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:48.166528 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:48.166562 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:48.230410 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:48.221856    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.222556    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224311    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.224847    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:48.226455    1920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:49:48.230487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:48.230509 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:48.259694 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:48.259729 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:50.788374 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:50.799012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:50.799080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:50.824427 2149446 cri.go:96] found id: ""
	I1222 01:49:50.824451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.824460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:50.824467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:50.824526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:50.857602 2149446 cri.go:96] found id: ""
	I1222 01:49:50.857626 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.857636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:50.857642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:50.857700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:50.886784 2149446 cri.go:96] found id: ""
	I1222 01:49:50.886809 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.886817 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:50.886824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:50.886881 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:50.913209 2149446 cri.go:96] found id: ""
	I1222 01:49:50.913239 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.913249 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:50.913256 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:50.913362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:50.939349 2149446 cri.go:96] found id: ""
	I1222 01:49:50.939372 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.939381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:50.939387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:50.939458 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:50.968002 2149446 cri.go:96] found id: ""
	I1222 01:49:50.968068 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.968080 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:50.968092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:50.968182 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:50.992529 2149446 cri.go:96] found id: ""
	I1222 01:49:50.992596 2149446 logs.go:282] 0 containers: []
	W1222 01:49:50.992618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:50.992638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:50.992738 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:51.020072 2149446 cri.go:96] found id: ""
	I1222 01:49:51.020143 2149446 logs.go:282] 0 containers: []
	W1222 01:49:51.020165 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:51.020185 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:51.020224 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:51.085768 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:51.085801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:51.102878 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:51.102907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:51.169244 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:51.160719    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.161459    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163246    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163762    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.165256    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:51.160719    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.161459    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163246    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.163762    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:51.165256    2031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
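Every "describe nodes" attempt in this run fails identically: nothing is accepting connections on the apiserver port, so kubectl cannot even fetch the API group list. A quick way to confirm that from inside the node (a sketch, run via `minikube ssh`; assumes the standard localhost:8443 control-plane endpoint shown in the errors above):

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"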
	I1222 01:49:51.169268 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:51.169280 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:51.199236 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:51.199275 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
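Each polling cycle above walks the full list of control-plane components and finds no containers at all, which is why the gather step keeps falling back to host-level logs. A minimal sketch of that loop, assuming crictl is on PATH inside the node:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done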
	I1222 01:49:53.726478 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:53.736764 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:53.736833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:53.761645 2149446 cri.go:96] found id: ""
	I1222 01:49:53.761673 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.761681 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:53.761693 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:53.761752 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:53.790798 2149446 cri.go:96] found id: ""
	I1222 01:49:53.790875 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.790892 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:53.790899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:53.790960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:53.814874 2149446 cri.go:96] found id: ""
	I1222 01:49:53.814941 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.814956 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:53.814963 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:53.815022 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:53.839840 2149446 cri.go:96] found id: ""
	I1222 01:49:53.839906 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.839930 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:53.839950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:53.840033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:53.875780 2149446 cri.go:96] found id: ""
	I1222 01:49:53.875850 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.875870 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:53.875890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:53.875989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:53.908372 2149446 cri.go:96] found id: ""
	I1222 01:49:53.908451 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.908473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:53.908519 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:53.908603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:53.937905 2149446 cri.go:96] found id: ""
	I1222 01:49:53.937975 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.937989 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:53.937997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:53.938056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:53.964017 2149446 cri.go:96] found id: ""
	I1222 01:49:53.964043 2149446 logs.go:282] 0 containers: []
	W1222 01:49:53.964052 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:53.964061 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:53.964072 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:53.980713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:53.980743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:54.051694 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:54.042644    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.043169    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.044865    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.045545    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.047279    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:54.042644    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.043169    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.044865    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.045545    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:54.047279    2143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:49:54.051718 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:54.051732 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:54.081680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:54.081715 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:49:54.113831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:54.113858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
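Each cycle opens with a process-level check, `sudo pgrep -xnf kube-apiserver.*minikube.*`, before consulting crictl: -f matches against the full command line, -x requires the whole command line to match the pattern, and -n returns only the newest matching process. The same check with an explicit failure message (a sketch):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no kube-apiserver process for this profile"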
	I1222 01:49:56.682134 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:56.692686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:56.692755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:56.718335 2149446 cri.go:96] found id: ""
	I1222 01:49:56.718360 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.718369 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:56.718375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:56.718433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:56.750169 2149446 cri.go:96] found id: ""
	I1222 01:49:56.750199 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.750207 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:56.750214 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:56.750289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:56.775009 2149446 cri.go:96] found id: ""
	I1222 01:49:56.775033 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.775043 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:56.775049 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:56.775118 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:56.800202 2149446 cri.go:96] found id: ""
	I1222 01:49:56.800226 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.800245 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:56.800253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:56.800317 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:56.825535 2149446 cri.go:96] found id: ""
	I1222 01:49:56.825559 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.825567 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:56.825574 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:56.825633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:56.860355 2149446 cri.go:96] found id: ""
	I1222 01:49:56.860379 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.860388 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:56.860395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:56.860455 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:56.900413 2149446 cri.go:96] found id: ""
	I1222 01:49:56.900481 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.900502 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:56.900520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:56.900608 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:56.930278 2149446 cri.go:96] found id: ""
	I1222 01:49:56.930343 2149446 logs.go:282] 0 containers: []
	W1222 01:49:56.930365 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:56.930381 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:56.930394 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:56.997740 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:56.997776 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:57.016721 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:57.016750 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:49:57.084545 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:49:57.076568    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.077143    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.078617    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.079164    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.080737    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:49:57.076568    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.077143    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.078617    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.079164    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:49:57.080737    2257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:49:57.084566 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:49:57.084580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:49:57.112759 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:49:57.112793 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
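With no containers to inspect, the gather step pulls the same host-level sources every cycle, capped at 400 lines apiece. Reproduced by hand (a sketch; dmesg's -P/--nopager flag needs a reasonably recent util-linux):

    sudo journalctl -u kubelet -n 400    # kubelet service log
    sudo journalctl -u crio -n 400       # CRI-O service log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and worse, no pager, no color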
	I1222 01:49:58.183323 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:49:58.246165 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:49:58.246322 2149446 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
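The storageclass failure is a knock-on effect of the dead apiserver, not a manifest problem: kubectl cannot download the OpenAPI schema it validates against. The `--validate=false` escape hatch suggested in the error would not help here, because the apply itself still needs a reachable apiserver; shown only as a sketch of what the message suggests:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml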
	I1222 01:49:59.087252 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:49:59.163517 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:49:59.163599 2149446 retry.go:84] will retry after 41.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
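minikube's retry.go schedules its own backoff (41.7s here, 44.5s for the dashboard manifests below), so no manual intervention is needed. A hand-rolled equivalent, with a hypothetical fixed 30-second backoff, would look like:

    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml; do
      sleep 30   # hypothetical backoff; retry.go picks its own schedule
    done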
	I1222 01:49:59.646777 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:49:59.657009 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:49:59.657077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:49:59.682535 2149446 cri.go:96] found id: ""
	I1222 01:49:59.682562 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.682580 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:49:59.682587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:49:59.682685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:49:59.712155 2149446 cri.go:96] found id: ""
	I1222 01:49:59.712179 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.712194 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:49:59.712201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:49:59.712313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:49:59.736832 2149446 cri.go:96] found id: ""
	I1222 01:49:59.736859 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.736868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:49:59.736884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:49:59.736987 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:49:59.767806 2149446 cri.go:96] found id: ""
	I1222 01:49:59.767874 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.767897 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:49:59.767917 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:49:59.767991 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:49:59.795439 2149446 cri.go:96] found id: ""
	I1222 01:49:59.795504 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.795529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:49:59.795549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:49:59.795616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:49:59.819847 2149446 cri.go:96] found id: ""
	I1222 01:49:59.819882 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.819892 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:49:59.819899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:49:59.819966 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:49:59.850480 2149446 cri.go:96] found id: ""
	I1222 01:49:59.850509 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.850518 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:49:59.850524 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:49:59.850592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:49:59.884120 2149446 cri.go:96] found id: ""
	I1222 01:49:59.884144 2149446 logs.go:282] 0 containers: []
	W1222 01:49:59.884153 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:49:59.884161 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:49:59.884173 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:49:59.957806 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:49:59.957845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:49:59.974377 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:49:59.974406 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:00.101182 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:00.069859    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.070359    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.072368    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.077118    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.078499    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:00.069859    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.070359    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.072368    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.077118    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:00.078499    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:00.101220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:00.101234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:00.179956 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:00.180075 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:02.382663 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:02.449495 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:50:02.449536 2149446 retry.go:84] will retry after 44.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:50:02.730117 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:02.740995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:02.741072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:02.768772 2149446 cri.go:96] found id: ""
	I1222 01:50:02.768797 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.768805 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:02.768812 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:02.768871 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:02.794287 2149446 cri.go:96] found id: ""
	I1222 01:50:02.794311 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.794319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:02.794326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:02.794389 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:02.819466 2149446 cri.go:96] found id: ""
	I1222 01:50:02.819546 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.819568 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:02.819586 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:02.819669 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:02.853024 2149446 cri.go:96] found id: ""
	I1222 01:50:02.853053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.853062 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:02.853068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:02.853137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:02.880476 2149446 cri.go:96] found id: ""
	I1222 01:50:02.880500 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.880509 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:02.880522 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:02.880594 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:02.911828 2149446 cri.go:96] found id: ""
	I1222 01:50:02.911852 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.911861 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:02.911868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:02.911945 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:02.938488 2149446 cri.go:96] found id: ""
	I1222 01:50:02.938567 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.938589 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:02.938610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:02.938701 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:02.963720 2149446 cri.go:96] found id: ""
	I1222 01:50:02.963747 2149446 logs.go:282] 0 containers: []
	W1222 01:50:02.963756 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:02.963765 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:02.963777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:03.029961 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:03.030002 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:03.047145 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:03.047175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:03.113634 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:03.105215    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.105984    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.107648    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.108298    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.109821    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:03.105215    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.105984    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.107648    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.108298    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:03.109821    2493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:03.113655 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:03.113667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:03.143385 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:03.143421 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:05.679417 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:05.690070 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:05.690144 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:05.715488 2149446 cri.go:96] found id: ""
	I1222 01:50:05.715513 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.715531 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:05.715538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:05.715604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:05.746380 2149446 cri.go:96] found id: ""
	I1222 01:50:05.746456 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.746478 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:05.746492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:05.746570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:05.770990 2149446 cri.go:96] found id: ""
	I1222 01:50:05.771014 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.771023 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:05.771030 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:05.771092 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:05.796198 2149446 cri.go:96] found id: ""
	I1222 01:50:05.796220 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.796228 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:05.796235 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:05.796293 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:05.823560 2149446 cri.go:96] found id: ""
	I1222 01:50:05.823587 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.823595 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:05.823601 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:05.823660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:05.856315 2149446 cri.go:96] found id: ""
	I1222 01:50:05.856349 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.856359 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:05.856365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:05.856425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:05.895348 2149446 cri.go:96] found id: ""
	I1222 01:50:05.895375 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.895383 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:05.895390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:05.895452 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:05.924398 2149446 cri.go:96] found id: ""
	I1222 01:50:05.924466 2149446 logs.go:282] 0 containers: []
	W1222 01:50:05.924478 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:05.924487 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:05.924499 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:05.952966 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:05.953003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:05.984569 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:05.984596 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:06.051373 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:06.051410 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:06.069275 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:06.069345 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:06.136767 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:06.127838    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.128596    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.130370    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.131018    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.132875    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:06.127838    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.128596    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.130370    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.131018    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:06.132875    2618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:08.637440 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:08.649276 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:08.649375 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:08.676353 2149446 cri.go:96] found id: ""
	I1222 01:50:08.676377 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.676386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:08.676393 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:08.676457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:08.702672 2149446 cri.go:96] found id: ""
	I1222 01:50:08.702696 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.702705 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:08.702712 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:08.702773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:08.733007 2149446 cri.go:96] found id: ""
	I1222 01:50:08.733029 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.733038 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:08.733044 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:08.733108 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:08.758339 2149446 cri.go:96] found id: ""
	I1222 01:50:08.758363 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.758371 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:08.758378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:08.758437 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:08.783187 2149446 cri.go:96] found id: ""
	I1222 01:50:08.783210 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.783219 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:08.783225 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:08.783290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:08.813005 2149446 cri.go:96] found id: ""
	I1222 01:50:08.813028 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.813036 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:08.813042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:08.813102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:08.840050 2149446 cri.go:96] found id: ""
	I1222 01:50:08.840073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.840081 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:08.840087 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:08.840232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:08.879901 2149446 cri.go:96] found id: ""
	I1222 01:50:08.879976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:08.880013 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:08.880039 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:08.880067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:08.955189 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:08.945414    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.946257    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948041    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948826    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.950714    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:08.945414    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.946257    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948041    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.948826    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:08.950714    2715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:08.955272 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:08.955297 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:08.983942 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:08.983976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:09.023107 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:09.023137 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:09.089046 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:09.089084 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
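	[note] The lines above are one complete diagnostic cycle: minikube probes CRI-O for each expected control-plane container, finds none, and then collects node logs. A minimal bash sketch of what the cycle amounts to — the individual commands are copied verbatim from the log, but the loop structure is an illustration, not minikube's actual Go code:

	    # Probe CRI-O for each expected control-plane container (commands as logged).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
	      # An empty result is what the log reports as: No container was found matching "$name"
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    # When nothing is found, gather the supporting logs (also as logged):
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
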
	I1222 01:50:11.605972 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:11.618258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:11.618329 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:11.663499 2149446 cri.go:96] found id: ""
	I1222 01:50:11.663523 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.663531 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:11.663538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:11.663603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:11.693496 2149446 cri.go:96] found id: ""
	I1222 01:50:11.693524 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.693533 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:11.693539 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:11.693610 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:11.719147 2149446 cri.go:96] found id: ""
	I1222 01:50:11.719173 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.719183 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:11.719189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:11.719256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:11.744820 2149446 cri.go:96] found id: ""
	I1222 01:50:11.744845 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.744854 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:11.744860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:11.744919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:11.770539 2149446 cri.go:96] found id: ""
	I1222 01:50:11.770566 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.770575 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:11.770582 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:11.770643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:11.796499 2149446 cri.go:96] found id: ""
	I1222 01:50:11.796522 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.796531 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:11.796538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:11.796596 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:11.821798 2149446 cri.go:96] found id: ""
	I1222 01:50:11.821822 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.821831 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:11.821837 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:11.821901 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:11.855646 2149446 cri.go:96] found id: ""
	I1222 01:50:11.855668 2149446 logs.go:282] 0 containers: []
	W1222 01:50:11.855677 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:11.855685 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:11.855697 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:11.897110 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:11.897191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:11.968365 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:11.968400 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:11.985862 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:11.985890 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:12.057003 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:12.047720    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.048930    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.049816    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.050506    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.052441    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:12.047720    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.048930    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.049816    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.050506    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:12.052441    2847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:12.057028 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:12.057044 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:14.587717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:14.601824 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:14.601894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:14.633615 2149446 cri.go:96] found id: ""
	I1222 01:50:14.633639 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.633651 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:14.633667 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:14.633734 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:14.665995 2149446 cri.go:96] found id: ""
	I1222 01:50:14.666024 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.666034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:14.666040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:14.666104 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:14.694750 2149446 cri.go:96] found id: ""
	I1222 01:50:14.694832 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.694846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:14.694854 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:14.694914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:14.721367 2149446 cri.go:96] found id: ""
	I1222 01:50:14.721391 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.721401 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:14.721407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:14.721465 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:14.746783 2149446 cri.go:96] found id: ""
	I1222 01:50:14.746864 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.746880 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:14.746887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:14.746951 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:14.776055 2149446 cri.go:96] found id: ""
	I1222 01:50:14.776078 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.776087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:14.776094 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:14.776154 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:14.802137 2149446 cri.go:96] found id: ""
	I1222 01:50:14.802162 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.802171 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:14.802178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:14.802236 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:14.827099 2149446 cri.go:96] found id: ""
	I1222 01:50:14.827176 2149446 logs.go:282] 0 containers: []
	W1222 01:50:14.827192 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:14.827203 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:14.827215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:14.843942 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:14.843972 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:14.924047 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:14.915431    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.916077    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918044    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918580    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.920233    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:14.915431    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.916077    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918044    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.918580    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:14.920233    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:14.924081 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:14.924094 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:14.952630 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:14.952666 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:14.981965 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:14.981992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:17.554296 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:17.564949 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:17.565018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:17.596428 2149446 cri.go:96] found id: ""
	I1222 01:50:17.596451 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.596465 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:17.596472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:17.596531 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:17.629782 2149446 cri.go:96] found id: ""
	I1222 01:50:17.629804 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.629812 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:17.629820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:17.629877 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:17.658278 2149446 cri.go:96] found id: ""
	I1222 01:50:17.658305 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.658313 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:17.658319 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:17.658384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:17.684159 2149446 cri.go:96] found id: ""
	I1222 01:50:17.684227 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.684248 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:17.684268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:17.684395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:17.712612 2149446 cri.go:96] found id: ""
	I1222 01:50:17.712682 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.712713 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:17.712751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:17.712835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:17.740669 2149446 cri.go:96] found id: ""
	I1222 01:50:17.740737 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.740758 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:17.740779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:17.740868 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:17.767931 2149446 cri.go:96] found id: ""
	I1222 01:50:17.767998 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.768021 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:17.768042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:17.768129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:17.793937 2149446 cri.go:96] found id: ""
	I1222 01:50:17.794002 2149446 logs.go:282] 0 containers: []
	W1222 01:50:17.794015 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:17.794025 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:17.794037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:17.860940 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:17.860976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:17.885026 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:17.885106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:17.949991 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:17.942003    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.942521    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944226    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944614    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.946054    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:17.942003    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.942521    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944226    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.944614    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:17.946054    3063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:17.950015 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:17.950028 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:17.978296 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:17.978329 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
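	[note] Every "describe nodes" attempt above fails the same way: kubectl gets connection refused on localhost:8443, which is consistent with crictl finding no kube-apiserver container at all. A hypothetical follow-up check — not run in this log — that would confirm nothing is listening on the apiserver port:

	    # Hypothetical (not from the log): verify no process has port 8443 open,
	    # which would explain the repeated "connection refused" from kubectl.
	    sudo ss -ltnp | grep ':8443' || echo "no listener on 8443 - apiserver is down"
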
	I1222 01:50:20.513236 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:20.523539 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:20.523613 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:20.548569 2149446 cri.go:96] found id: ""
	I1222 01:50:20.548592 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.548600 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:20.548607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:20.548670 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:20.573066 2149446 cri.go:96] found id: ""
	I1222 01:50:20.573145 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.573167 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:20.573185 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:20.573276 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:20.604633 2149446 cri.go:96] found id: ""
	I1222 01:50:20.604659 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.604668 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:20.604675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:20.604735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:20.635610 2149446 cri.go:96] found id: ""
	I1222 01:50:20.635636 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.635645 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:20.635651 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:20.635717 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:20.670374 2149446 cri.go:96] found id: ""
	I1222 01:50:20.670441 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.670455 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:20.670462 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:20.670524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:20.699800 2149446 cri.go:96] found id: ""
	I1222 01:50:20.699822 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.699830 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:20.699837 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:20.699894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:20.728024 2149446 cri.go:96] found id: ""
	I1222 01:50:20.728048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.728057 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:20.728063 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:20.728122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:20.758174 2149446 cri.go:96] found id: ""
	I1222 01:50:20.758198 2149446 logs.go:282] 0 containers: []
	W1222 01:50:20.758206 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:20.758215 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:20.758227 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:20.823612 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:20.823650 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:20.840051 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:20.840081 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:20.925259 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:20.915370    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.916028    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.918122    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.919738    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.921230    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:20.915370    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.916028    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.918122    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.919738    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:20.921230    3179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:20.925294 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:20.925307 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:20.955201 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:20.955234 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:23.483497 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:23.494221 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:23.494298 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:23.523189 2149446 cri.go:96] found id: ""
	I1222 01:50:23.523214 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.523223 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:23.523229 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:23.523289 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:23.550633 2149446 cri.go:96] found id: ""
	I1222 01:50:23.550656 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.550666 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:23.550672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:23.550730 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:23.576289 2149446 cri.go:96] found id: ""
	I1222 01:50:23.576312 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.576328 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:23.576335 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:23.576395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:23.611281 2149446 cri.go:96] found id: ""
	I1222 01:50:23.611304 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.611312 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:23.611320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:23.611381 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:23.640498 2149446 cri.go:96] found id: ""
	I1222 01:50:23.640520 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.640529 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:23.640535 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:23.640595 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:23.674625 2149446 cri.go:96] found id: ""
	I1222 01:50:23.674649 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.674658 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:23.674665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:23.674723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:23.699746 2149446 cri.go:96] found id: ""
	I1222 01:50:23.699771 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.699780 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:23.699786 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:23.699849 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:23.724314 2149446 cri.go:96] found id: ""
	I1222 01:50:23.724345 2149446 logs.go:282] 0 containers: []
	W1222 01:50:23.724353 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:23.724369 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:23.724381 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:23.740816 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:23.740846 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:23.813467 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:23.804843    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.805658    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807401    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807908    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.809520    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:23.804843    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.805658    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807401    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.807908    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:23.809520    3290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:23.813488 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:23.813501 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:23.844492 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:23.844529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:23.883925 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:23.884003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:26.466430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:26.480754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:26.480834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:26.505742 2149446 cri.go:96] found id: ""
	I1222 01:50:26.505825 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.505856 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:26.505876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:26.505986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:26.532705 2149446 cri.go:96] found id: ""
	I1222 01:50:26.532781 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.532816 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:26.532839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:26.532932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:26.557032 2149446 cri.go:96] found id: ""
	I1222 01:50:26.557116 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.557139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:26.557170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:26.557247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:26.583725 2149446 cri.go:96] found id: ""
	I1222 01:50:26.583812 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.583835 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:26.583853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:26.583962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:26.617024 2149446 cri.go:96] found id: ""
	I1222 01:50:26.617089 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.617121 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:26.617140 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:26.617246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:26.646386 2149446 cri.go:96] found id: ""
	I1222 01:50:26.646464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.646488 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:26.646507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:26.646612 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:26.678246 2149446 cri.go:96] found id: ""
	I1222 01:50:26.678269 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.678278 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:26.678285 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:26.678347 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:26.703982 2149446 cri.go:96] found id: ""
	I1222 01:50:26.704008 2149446 logs.go:282] 0 containers: []
	W1222 01:50:26.704017 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:26.704026 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:26.704042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:26.720933 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:26.720994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:26.786938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:26.778772    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.779363    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781037    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.781428    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:26.782886    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:50:26.786961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:26.786974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:26.815331 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:26.815367 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:26.845602 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:26.845630 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.422527 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:29.433711 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:29.433793 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:29.465497 2149446 cri.go:96] found id: ""
	I1222 01:50:29.465548 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.465558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:29.465564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:29.465634 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:29.490894 2149446 cri.go:96] found id: ""
	I1222 01:50:29.490919 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.490927 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:29.490938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:29.490998 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:29.520626 2149446 cri.go:96] found id: ""
	I1222 01:50:29.520661 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.520670 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:29.520677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:29.520747 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:29.547848 2149446 cri.go:96] found id: ""
	I1222 01:50:29.547885 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.547893 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:29.547900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:29.547982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:29.577023 2149446 cri.go:96] found id: ""
	I1222 01:50:29.577048 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.577057 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:29.577065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:29.577127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:29.611778 2149446 cri.go:96] found id: ""
	I1222 01:50:29.611814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.611823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:29.611830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:29.611903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:29.643842 2149446 cri.go:96] found id: ""
	I1222 01:50:29.643918 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.643941 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:29.643958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:29.644044 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:29.674585 2149446 cri.go:96] found id: ""
	I1222 01:50:29.674658 2149446 logs.go:282] 0 containers: []
	W1222 01:50:29.674680 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:29.674704 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:29.674743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:29.703700 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:29.703743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:29.735551 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:29.735577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:29.800624 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:29.800662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:29.817316 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:29.817364 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:29.907057 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:50:29.895787    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.896652    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898306    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.898827    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:29.902956    3533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
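	[note] The cycle timestamps (01:50:08, :11, :14, :17, :20, :23, :26, :29, :32, ...) show the same probe repeating roughly every three seconds. A rough bash sketch of that retry cadence, assumed purely for illustration (minikube implements this wait loop in Go, and the 300 s deadline here is a placeholder, not the test's actual timeout):

	    # Sketch of the ~3 s retry loop visible in the timestamps above:
	    # keep probing for a kube-apiserver process until one appears or a deadline passes.
	    deadline=$((SECONDS + 300))   # placeholder deadline, not from the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver"
	        exit 1
	      fi
	      sleep 3
	    done
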
	I1222 01:50:32.407335 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:32.417713 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:32.417782 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:32.443620 2149446 cri.go:96] found id: ""
	I1222 01:50:32.443648 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.443656 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:32.443662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:32.443722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:32.473803 2149446 cri.go:96] found id: ""
	I1222 01:50:32.473831 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.473840 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:32.473847 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:32.473908 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:32.503900 2149446 cri.go:96] found id: ""
	I1222 01:50:32.503928 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.503936 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:32.503943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:32.504007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:32.533317 2149446 cri.go:96] found id: ""
	I1222 01:50:32.533427 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.533436 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:32.533442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:32.533503 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:32.558269 2149446 cri.go:96] found id: ""
	I1222 01:50:32.558292 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.558300 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:32.558307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:32.558366 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:32.583954 2149446 cri.go:96] found id: ""
	I1222 01:50:32.583976 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.583984 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:32.583990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:32.584048 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:32.619938 2149446 cri.go:96] found id: ""
	I1222 01:50:32.619959 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.619967 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:32.619974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:32.620032 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:32.646855 2149446 cri.go:96] found id: ""
	I1222 01:50:32.646876 2149446 logs.go:282] 0 containers: []
	W1222 01:50:32.646883 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:32.646892 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:32.646903 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:32.681292 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:32.681331 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:32.709454 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:32.709486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:32.774478 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:32.774512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:32.790947 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:32.790977 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:32.865716 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:32.856728    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.857643    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.858424    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.859539    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:32.860242    3647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
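The block above is one full iteration of the apiserver health probe that repeats for the rest of this log: look for a kube-apiserver process, then ask the CRI runtime for a matching container, and fall back to gathering node logs when both come back empty. A minimal Go sketch of the same check, assuming sudo, pgrep, and crictl are available on the node (an illustration of the pattern, not minikube's actual implementation):

// probe.go: reproduce the kube-apiserver liveness probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs a command and returns its trimmed stdout; a non-zero exit
// (pgrep with no match, crictl with no containers) is treated as "nothing
// found", which is what keeps the polling loop above going.
func probe(name string, args ...string) string {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	pid := probe("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	ids := probe("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet", "--name=kube-apiserver")
	fmt.Printf("apiserver pid=%q container ids=%q\n", pid, ids)
	if pid == "" && ids == "" {
		fmt.Println("kube-apiserver is not running; requests to localhost:8443 will be refused")
	}
}

An empty result from both commands corresponds to the `found id: ""` / `0 containers` lines above and explains why every kubectl call in this log is refused.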
	I1222 01:50:35.366405 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:35.377118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:35.377211 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:35.405034 2149446 cri.go:96] found id: ""
	I1222 01:50:35.405073 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.405083 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:35.405089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:35.405172 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:35.429958 2149446 cri.go:96] found id: ""
	I1222 01:50:35.429985 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.429994 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:35.430002 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:35.430063 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:35.455069 2149446 cri.go:96] found id: ""
	I1222 01:50:35.455094 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.455103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:35.455112 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:35.455173 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:35.482985 2149446 cri.go:96] found id: ""
	I1222 01:50:35.483053 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.483075 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:35.483089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:35.483165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:35.506805 2149446 cri.go:96] found id: ""
	I1222 01:50:35.506874 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.506894 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:35.506909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:35.506983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:35.532897 2149446 cri.go:96] found id: ""
	I1222 01:50:35.532925 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.532934 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:35.532941 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:35.533002 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:35.564747 2149446 cri.go:96] found id: ""
	I1222 01:50:35.564774 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.564783 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:35.564789 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:35.564858 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:35.589912 2149446 cri.go:96] found id: ""
	I1222 01:50:35.589939 2149446 logs.go:282] 0 containers: []
	W1222 01:50:35.589948 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:35.589956 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:35.589969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:35.665301 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:35.665441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:35.682348 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:35.682377 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:35.749489 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:35.741663    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.742347    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.743818    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.744264    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:35.745698    3747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:35.749511 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:35.749524 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:35.776750 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:35.776782 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.306113 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:38.316656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:38.316729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:38.343191 2149446 cri.go:96] found id: ""
	I1222 01:50:38.343229 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.343238 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:38.343245 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:38.343311 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:38.371584 2149446 cri.go:96] found id: ""
	I1222 01:50:38.371609 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.371618 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:38.371625 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:38.371686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:38.399378 2149446 cri.go:96] found id: ""
	I1222 01:50:38.399401 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.399410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:38.399416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:38.399476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:38.424466 2149446 cri.go:96] found id: ""
	I1222 01:50:38.424492 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.424500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:38.424513 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:38.424575 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:38.449419 2149446 cri.go:96] found id: ""
	I1222 01:50:38.449452 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.449461 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:38.449468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:38.449533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:38.475321 2149446 cri.go:96] found id: ""
	I1222 01:50:38.475347 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.475356 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:38.475363 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:38.475423 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:38.499428 2149446 cri.go:96] found id: ""
	I1222 01:50:38.499458 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.499469 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:38.499476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:38.499541 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:38.524738 2149446 cri.go:96] found id: ""
	I1222 01:50:38.524766 2149446 logs.go:282] 0 containers: []
	W1222 01:50:38.524780 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:38.524789 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:38.524800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:38.557984 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:38.558022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:38.589548 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:38.589577 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:38.671416 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:38.671453 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:38.688135 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:38.688164 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:38.751396 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:38.743333    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.744008    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.745568    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.746050    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:38.747565    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
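Every stderr block above ends the same way: nothing is listening on the apiserver port. The failure is reproducible with a plain TCP dial; a minimal sketch, with the host and port taken from the log and everything else standard library:

// dial.go: show why kubectl keeps failing against localhost:8443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this prints the same
		// "dial tcp [::1]:8443: connect: connection refused" seen above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}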
	I1222 01:50:40.899829 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:50:40.960744 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:40.960855 2149446 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
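The "apply failed, will retry" warning at addons.go:477 implies a retry loop around `kubectl apply`; the same failure recurs below for the ten dashboard manifests. A sketch of that loop using the paths shown in the log (the retry count and backoff are made up for illustration and are not minikube's actual policy):

// retryapply.go: retry an addon manifest apply while the apiserver is down.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out the same way the log line does; sudo accepts the
// KUBECONFIG=... assignment as an environment override for the child process.
func applyManifest(path string) error {
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	const manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
	for attempt := 1; attempt <= 5; attempt++ { // illustrative retry budget
		err := applyManifest(manifest)
		if err == nil {
			fmt.Println("applied", manifest)
			return
		}
		fmt.Printf("attempt %d failed, will retry: %v\n", attempt, err)
		time.Sleep(10 * time.Second) // illustrative backoff
	}
	fmt.Println("giving up: the apiserver never became reachable")
}

Note that the validation error itself is secondary: kubectl only suggests --validate=false because it cannot download the OpenAPI schema, and that download fails for the same connection-refused reason as everything else here.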
	I1222 01:50:41.252211 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:41.263048 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:41.263123 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:41.296135 2149446 cri.go:96] found id: ""
	I1222 01:50:41.296159 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.296167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:41.296174 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:41.296232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:41.322198 2149446 cri.go:96] found id: ""
	I1222 01:50:41.322277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.322291 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:41.322299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:41.322360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:41.348725 2149446 cri.go:96] found id: ""
	I1222 01:50:41.348750 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.348760 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:41.348766 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:41.348825 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:41.374926 2149446 cri.go:96] found id: ""
	I1222 01:50:41.375005 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.375021 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:41.375029 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:41.375096 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:41.406446 2149446 cri.go:96] found id: ""
	I1222 01:50:41.406537 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.406556 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:41.406566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:41.406648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:41.432212 2149446 cri.go:96] found id: ""
	I1222 01:50:41.432239 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.432247 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:41.432254 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:41.432321 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:41.463730 2149446 cri.go:96] found id: ""
	I1222 01:50:41.463755 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.463765 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:41.463771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:41.463833 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:41.490157 2149446 cri.go:96] found id: ""
	I1222 01:50:41.490182 2149446 logs.go:282] 0 containers: []
	W1222 01:50:41.490193 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:41.490201 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:41.490213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:41.558943 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:41.558981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:41.576405 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:41.576435 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:41.666063 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:41.657908    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.658547    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660067    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.660654    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:41.662156    3973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:41.666083 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:41.666095 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:41.696072 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:41.696105 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.226469 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:44.236983 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:44.237055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:44.261867 2149446 cri.go:96] found id: ""
	I1222 01:50:44.261892 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.261900 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:44.261907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:44.261971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:44.287438 2149446 cri.go:96] found id: ""
	I1222 01:50:44.287462 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.287471 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:44.287478 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:44.287536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:44.313755 2149446 cri.go:96] found id: ""
	I1222 01:50:44.313779 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.313788 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:44.313794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:44.313861 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:44.347563 2149446 cri.go:96] found id: ""
	I1222 01:50:44.347589 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.347597 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:44.347604 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:44.347688 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:44.377289 2149446 cri.go:96] found id: ""
	I1222 01:50:44.377313 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.377354 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:44.377362 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:44.377443 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:44.403181 2149446 cri.go:96] found id: ""
	I1222 01:50:44.403207 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.403215 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:44.403222 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:44.403280 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:44.432425 2149446 cri.go:96] found id: ""
	I1222 01:50:44.432449 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.432457 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:44.432465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:44.432529 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:44.458001 2149446 cri.go:96] found id: ""
	I1222 01:50:44.458030 2149446 logs.go:282] 0 containers: []
	W1222 01:50:44.458039 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:44.458048 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:44.458089 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:44.485816 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:44.485844 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:44.552935 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:44.552970 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:44.569824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:44.569852 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:44.652688 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:44.644560    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.645403    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647098    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.647385    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:44.648885    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:44.652760 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:44.652785 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:46.992640 2149446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:50:47.054063 2149446 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:50:47.054189 2149446 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:50:47.057341 2149446 out.go:179] * Enabled addons: 
	I1222 01:50:47.060205 2149446 addons.go:530] duration metric: took 1m59.289939725s for enable addons: enabled=[]
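Each diagnostic cycle gathers the same five log sources, and their order shifts from cycle to cycle (CRI-O first at 01:50:32, kubelet first at 01:50:35), which is consistent with ranging over a Go map, though that is a guess from the output alone. A sketch that reproduces the gathering step, with the command strings copied verbatim from the log:

// gather.go: run the five "Gathering logs for ..." commands from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	// Map iteration order is deliberately randomized in Go, matching the
	// shifting order observed across the cycles above.
	for name, cmdline := range sources {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}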
	I1222 01:50:47.182147 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:47.192818 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:47.192903 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:47.224024 2149446 cri.go:96] found id: ""
	I1222 01:50:47.224046 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.224055 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:47.224061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:47.224122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:47.250330 2149446 cri.go:96] found id: ""
	I1222 01:50:47.250357 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.250365 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:47.250372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:47.250431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:47.275271 2149446 cri.go:96] found id: ""
	I1222 01:50:47.275294 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.275302 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:47.275309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:47.275370 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:47.300369 2149446 cri.go:96] found id: ""
	I1222 01:50:47.300394 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.300403 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:47.300409 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:47.300468 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:47.325147 2149446 cri.go:96] found id: ""
	I1222 01:50:47.325213 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.325234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:47.325253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:47.325362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:47.353926 2149446 cri.go:96] found id: ""
	I1222 01:50:47.353953 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.353961 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:47.353968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:47.354028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:47.378935 2149446 cri.go:96] found id: ""
	I1222 01:50:47.378961 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.378970 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:47.378977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:47.379040 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:47.404019 2149446 cri.go:96] found id: ""
	I1222 01:50:47.404042 2149446 logs.go:282] 0 containers: []
	W1222 01:50:47.404051 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:47.404061 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:47.404078 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:47.473731 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:47.473766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:47.490597 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:47.490625 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:47.557658 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:47.550047    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.550549    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552046    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.552472    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:47.553897    4202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:47.557681 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:47.557694 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:47.586853 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:47.586888 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:50.125437 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:50.138125 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:50.138195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:50.166675 2149446 cri.go:96] found id: ""
	I1222 01:50:50.166698 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.166706 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:50.166712 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:50.166775 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:50.195252 2149446 cri.go:96] found id: ""
	I1222 01:50:50.195277 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.195285 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:50.195293 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:50.195354 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:50.221357 2149446 cri.go:96] found id: ""
	I1222 01:50:50.221390 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.221406 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:50.221413 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:50.221504 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:50.247114 2149446 cri.go:96] found id: ""
	I1222 01:50:50.247137 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.247145 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:50.247163 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:50.247221 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:50.272457 2149446 cri.go:96] found id: ""
	I1222 01:50:50.272482 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.272491 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:50.272497 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:50.272558 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:50.302435 2149446 cri.go:96] found id: ""
	I1222 01:50:50.302464 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.302473 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:50.302479 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:50.302554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:50.325940 2149446 cri.go:96] found id: ""
	I1222 01:50:50.326010 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.326025 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:50.326033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:50.326095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:50.355131 2149446 cri.go:96] found id: ""
	I1222 01:50:50.355156 2149446 logs.go:282] 0 containers: []
	W1222 01:50:50.355167 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:50.355176 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:50.355219 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:50.421016 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:50.421050 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:50.438999 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:50.439026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:50.505148 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:50.497150    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.497999    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499544    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.499840    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:50.501315    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:50.505167 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:50.505178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:50.533117 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:50.533151 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
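
The cycle above repeats below on a roughly three-second cadence: probe for a kube-apiserver process, list each expected CRI container by name (all come back empty), then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that wait-and-retry shape, assuming a local runSSH stand-in for minikube's ssh_runner and an arbitrary two-minute deadline (both are assumptions, not minikube source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH stands in for minikube's ssh_runner; here it runs the command
// locally so the sketch is self-contained.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// The probe the log shows at the top of every cycle.
		if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// The real loop also lists CRI containers and gathers logs here (elided).
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
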
	I1222 01:50:53.063902 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:53.074689 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:53.074763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:53.105072 2149446 cri.go:96] found id: ""
	I1222 01:50:53.105098 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.105108 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:53.105114 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:53.105178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:53.135898 2149446 cri.go:96] found id: ""
	I1222 01:50:53.135924 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.135933 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:53.135939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:53.135999 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:53.167436 2149446 cri.go:96] found id: ""
	I1222 01:50:53.167461 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.167469 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:53.167476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:53.167559 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:53.194408 2149446 cri.go:96] found id: ""
	I1222 01:50:53.194523 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.194547 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:53.194566 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:53.194648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:53.219170 2149446 cri.go:96] found id: ""
	I1222 01:50:53.219232 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.219246 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:53.219259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:53.219319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:53.245034 2149446 cri.go:96] found id: ""
	I1222 01:50:53.245059 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.245067 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:53.245074 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:53.245131 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:53.271853 2149446 cri.go:96] found id: ""
	I1222 01:50:53.271884 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.271893 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:53.271899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:53.271970 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:53.296192 2149446 cri.go:96] found id: ""
	I1222 01:50:53.296225 2149446 logs.go:282] 0 containers: []
	W1222 01:50:53.296234 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:53.296243 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:53.296259 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:53.363334 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:53.363372 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:53.381487 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:53.381517 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:53.448549 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:53.440319    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.440874    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.442515    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.443004    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:53.444491    4428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:53.448571 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:53.448583 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:53.481685 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:53.481727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:50:56.012872 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:56.025767 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:56.025845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:56.053844 2149446 cri.go:96] found id: ""
	I1222 01:50:56.053873 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.053882 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:56.053890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:56.053961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:56.082849 2149446 cri.go:96] found id: ""
	I1222 01:50:56.082873 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.082882 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:56.082888 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:56.082964 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:56.116254 2149446 cri.go:96] found id: ""
	I1222 01:50:56.116336 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.116359 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:56.116379 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:56.116489 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:56.157611 2149446 cri.go:96] found id: ""
	I1222 01:50:56.157636 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.157644 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:56.157662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:56.157721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:56.183024 2149446 cri.go:96] found id: ""
	I1222 01:50:56.183102 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.183123 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:56.183137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:56.183214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:56.211789 2149446 cri.go:96] found id: ""
	I1222 01:50:56.211814 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.211823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:56.211830 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:56.211888 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:56.238212 2149446 cri.go:96] found id: ""
	I1222 01:50:56.238286 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.238309 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:56.238332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:56.238421 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:56.269053 2149446 cri.go:96] found id: ""
	I1222 01:50:56.269079 2149446 logs.go:282] 0 containers: []
	W1222 01:50:56.269088 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:56.269097 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:56.269110 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:56.339489 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:56.339527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:56.356488 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:56.356627 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:56.423568 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:56.415753    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.416507    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418088    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.418397    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:56.419880    4545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:56.423639 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:56.423664 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:56.452827 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:56.452859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
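
Every describe-nodes attempt above fails the same way: kubectl dials https://localhost:8443 and gets connection refused, which is consistent with the empty crictl listings, since with no kube-apiserver container running nothing is listening on 8443. A self-contained Go sketch of that port probe, with the address taken from the log and the timeout assumed:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address from the kubectl errors above; the 2s timeout is an assumption.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver container running, this yields "connection refused",
		// matching the stderr minikube captured from kubectl.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
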
	I1222 01:50:58.984493 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:50:58.994909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:50:58.994982 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:50:59.022329 2149446 cri.go:96] found id: ""
	I1222 01:50:59.022354 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.022363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:50:59.022369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:50:59.022429 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:50:59.047878 2149446 cri.go:96] found id: ""
	I1222 01:50:59.047901 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.047909 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:50:59.047915 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:50:59.047975 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:50:59.072596 2149446 cri.go:96] found id: ""
	I1222 01:50:59.072622 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.072631 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:50:59.072637 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:50:59.072696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:50:59.104362 2149446 cri.go:96] found id: ""
	I1222 01:50:59.104391 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.104400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:50:59.104406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:50:59.104470 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:50:59.143935 2149446 cri.go:96] found id: ""
	I1222 01:50:59.143960 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.143970 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:50:59.143977 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:50:59.144037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:50:59.172279 2149446 cri.go:96] found id: ""
	I1222 01:50:59.172305 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.172314 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:50:59.172321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:50:59.172390 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:50:59.197781 2149446 cri.go:96] found id: ""
	I1222 01:50:59.197804 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.197813 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:50:59.197819 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:50:59.197879 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:50:59.223417 2149446 cri.go:96] found id: ""
	I1222 01:50:59.223441 2149446 logs.go:282] 0 containers: []
	W1222 01:50:59.223450 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:50:59.223459 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:50:59.223470 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:50:59.289154 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:50:59.289191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:50:59.306341 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:50:59.306372 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:50:59.371286 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:50:59.363578    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.364246    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.365700    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.366151    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:50:59.367566    4655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:50:59.371309 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:50:59.371321 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:50:59.399863 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:50:59.399898 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:01.927120 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:01.939501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:01.939576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:01.967293 2149446 cri.go:96] found id: ""
	I1222 01:51:01.967326 2149446 logs.go:282] 0 containers: []
	W1222 01:51:01.967336 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:01.967343 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:01.967430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:02.010081 2149446 cri.go:96] found id: ""
	I1222 01:51:02.010124 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.010134 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:02.010141 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:02.010231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:02.037304 2149446 cri.go:96] found id: ""
	I1222 01:51:02.037354 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.037363 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:02.037369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:02.037430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:02.065893 2149446 cri.go:96] found id: ""
	I1222 01:51:02.065917 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.065926 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:02.065932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:02.065992 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:02.098337 2149446 cri.go:96] found id: ""
	I1222 01:51:02.098366 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.098375 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:02.098383 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:02.098448 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:02.137900 2149446 cri.go:96] found id: ""
	I1222 01:51:02.137929 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.137939 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:02.137946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:02.138104 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:02.170824 2149446 cri.go:96] found id: ""
	I1222 01:51:02.170852 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.170861 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:02.170868 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:02.170947 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:02.206007 2149446 cri.go:96] found id: ""
	I1222 01:51:02.206032 2149446 logs.go:282] 0 containers: []
	W1222 01:51:02.206041 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:02.206050 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:02.206062 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:02.273536 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:02.273574 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:02.290756 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:02.290788 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:02.358784 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:02.350805    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.351393    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.352892    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.353378    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:02.355003    4762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:02.358806 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:02.358821 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:02.386732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:02.386772 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:04.916635 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:04.927272 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:04.927352 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:04.952318 2149446 cri.go:96] found id: ""
	I1222 01:51:04.952350 2149446 logs.go:282] 0 containers: []
	W1222 01:51:04.952358 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:04.952364 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:04.952424 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:04.980287 2149446 cri.go:96] found id: ""
	I1222 01:51:04.980309 2149446 logs.go:282] 0 containers: []
	W1222 01:51:04.980317 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:04.980338 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:04.980398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:05.011897 2149446 cri.go:96] found id: ""
	I1222 01:51:05.011975 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.011998 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:05.012019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:05.012114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:05.039439 2149446 cri.go:96] found id: ""
	I1222 01:51:05.039464 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.039473 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:05.039480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:05.039562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:05.065276 2149446 cri.go:96] found id: ""
	I1222 01:51:05.065309 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.065318 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:05.065352 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:05.065421 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:05.104171 2149446 cri.go:96] found id: ""
	I1222 01:51:05.104250 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.104272 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:05.104293 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:05.104397 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:05.140837 2149446 cri.go:96] found id: ""
	I1222 01:51:05.140872 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.140882 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:05.140889 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:05.140962 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:05.174413 2149446 cri.go:96] found id: ""
	I1222 01:51:05.174439 2149446 logs.go:282] 0 containers: []
	W1222 01:51:05.174448 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:05.174457 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:05.174469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:05.241273 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:05.232831    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.233309    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.235135    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.235674    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:05.237437    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:05.241367 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:05.241395 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:05.269917 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:05.269947 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:05.298262 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:05.298289 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:05.366093 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:05.366127 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
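
Note that the "Gathering logs for ..." order is not stable: the cycle that just ended started with describe nodes, the next begins with container status, while earlier cycles began with kubelet. That is consistent with the log sources living in a Go map, whose iteration order is randomized per loop (an assumption about logs.go, not something this report confirms). A toy illustration:

package main

import "fmt"

func main() {
	// Hypothetical stand-in for the per-source gather commands in logs.go.
	sources := map[string]string{
		"kubelet":          `journalctl -u kubelet -n 400`,
		"dmesg":            `dmesg ... | tail -n 400`,
		"describe nodes":   `kubectl describe nodes ...`,
		"CRI-O":            `journalctl -u crio -n 400`,
		"container status": `crictl ps -a`,
	}
	// Go randomizes map iteration order, so repeated runs print these in
	// different orders, like the shifting order across the cycles above.
	for name := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
	}
}
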
	I1222 01:51:07.884446 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:07.897419 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:07.897506 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:07.923443 2149446 cri.go:96] found id: ""
	I1222 01:51:07.923466 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.923474 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:07.923481 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:07.923538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:07.949573 2149446 cri.go:96] found id: ""
	I1222 01:51:07.949595 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.949604 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:07.949610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:07.949672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:07.974625 2149446 cri.go:96] found id: ""
	I1222 01:51:07.974648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:07.974657 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:07.974664 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:07.974722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:08.000701 2149446 cri.go:96] found id: ""
	I1222 01:51:08.000729 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.000738 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:08.000746 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:08.000823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:08.030073 2149446 cri.go:96] found id: ""
	I1222 01:51:08.030098 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.030106 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:08.030113 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:08.030184 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:08.056629 2149446 cri.go:96] found id: ""
	I1222 01:51:08.056665 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.056675 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:08.056682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:08.056742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:08.086264 2149446 cri.go:96] found id: ""
	I1222 01:51:08.086299 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.086308 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:08.086320 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:08.086393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:08.116679 2149446 cri.go:96] found id: ""
	I1222 01:51:08.116709 2149446 logs.go:282] 0 containers: []
	W1222 01:51:08.116719 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:08.116728 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:08.116748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:08.157494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:08.157521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:08.233683 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:08.233771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:08.250905 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:08.250934 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:08.316451 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:08.307255    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.308718    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.309622    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311377    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:08.311630    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:08.316472 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:08.316484 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:10.846347 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:10.861517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:10.861699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:10.899351 2149446 cri.go:96] found id: ""
	I1222 01:51:10.899458 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.899504 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:10.899543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:10.899659 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:10.929750 2149446 cri.go:96] found id: ""
	I1222 01:51:10.929818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.929839 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:10.929851 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:10.929929 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:10.960575 2149446 cri.go:96] found id: ""
	I1222 01:51:10.960610 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.960619 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:10.960626 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:10.960698 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:10.985907 2149446 cri.go:96] found id: ""
	I1222 01:51:10.985932 2149446 logs.go:282] 0 containers: []
	W1222 01:51:10.985940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:10.985947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:10.986010 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:11.014799 2149446 cri.go:96] found id: ""
	I1222 01:51:11.014878 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.014893 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:11.014901 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:11.014965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:11.040928 2149446 cri.go:96] found id: ""
	I1222 01:51:11.040960 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.040970 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:11.040976 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:11.041078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:11.067246 2149446 cri.go:96] found id: ""
	I1222 01:51:11.067327 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.067351 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:11.067365 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:11.067442 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:11.095605 2149446 cri.go:96] found id: ""
	I1222 01:51:11.095631 2149446 logs.go:282] 0 containers: []
	W1222 01:51:11.095640 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:11.095650 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:11.095662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:11.169987 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:11.170076 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:11.186770 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:11.186800 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:11.248524 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:11.240323    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.241076    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242336    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.242961    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:11.244559    5101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:11.248546 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:11.248559 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:11.276178 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:11.276215 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:13.807549 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:13.818390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:13.818463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:13.844700 2149446 cri.go:96] found id: ""
	I1222 01:51:13.844722 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.844730 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:13.844736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:13.844795 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:13.875626 2149446 cri.go:96] found id: ""
	I1222 01:51:13.875648 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.875656 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:13.875662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:13.875727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:13.904646 2149446 cri.go:96] found id: ""
	I1222 01:51:13.904667 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.904676 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:13.904682 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:13.904740 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:13.933165 2149446 cri.go:96] found id: ""
	I1222 01:51:13.933241 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.933263 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:13.933282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:13.933402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:13.961468 2149446 cri.go:96] found id: ""
	I1222 01:51:13.961492 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.961501 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:13.961507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:13.961576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:13.985666 2149446 cri.go:96] found id: ""
	I1222 01:51:13.985689 2149446 logs.go:282] 0 containers: []
	W1222 01:51:13.985697 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:13.985704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:13.985760 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:14.018377 2149446 cri.go:96] found id: ""
	I1222 01:51:14.018402 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.018411 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:14.018417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:14.018484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:14.044854 2149446 cri.go:96] found id: ""
	I1222 01:51:14.044879 2149446 logs.go:282] 0 containers: []
	W1222 01:51:14.044888 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
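Each poll cycle above checks every expected control-plane component the same way: one crictl listing per name, and every one returns empty. A minimal bash sketch reproducing those checks (the crictl invocation and the component names are copied from the log; the loop wrapper and echo messages are illustrative, not minikube's actual code):

    #!/usr/bin/env bash
    # One crictl listing per expected component, as in the log above; an empty
    # result means no container (running or exited) exists for that name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done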
	I1222 01:51:14.044897 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:14.044933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:14.110126 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:14.110206 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:14.128030 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:14.128108 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:14.196957 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:14.188697    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.189523    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191343    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191689    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.193153    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:14.188697    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.189523    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191343    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.191689    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:14.193153    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
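Every "describe nodes" gather fails identically: nothing is listening on localhost:8443, so each kubectl discovery request dies with "connection refused". A hypothetical guard that surfaces the dead port before invoking kubectl (the /dev/tcp probe is a bash-specific illustration and an assumption; the kubectl command and paths are copied verbatim from the log):

    # Probe the apiserver port first; bash's /dev/tcp redirection fails fast
    # with the same "connection refused" the log shows when nothing listens.
    if (exec 3<>/dev/tcp/localhost/8443) 2>/dev/null; then
      sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
          --kubeconfig=/var/lib/minikube/kubeconfig
    else
      echo "connect to localhost:8443 refused - skipping 'describe nodes'" >&2
    fi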
	I1222 01:51:14.197029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:14.197056 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:14.226166 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:14.226203 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:16.756369 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:16.767083 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:16.767155 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:16.793678 2149446 cri.go:96] found id: ""
	I1222 01:51:16.793704 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.793712 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:16.793719 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:16.793779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:16.819158 2149446 cri.go:96] found id: ""
	I1222 01:51:16.819183 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.819192 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:16.819198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:16.819260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:16.848081 2149446 cri.go:96] found id: ""
	I1222 01:51:16.848103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.848112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:16.848119 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:16.848178 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:16.889132 2149446 cri.go:96] found id: ""
	I1222 01:51:16.889210 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.889233 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:16.889253 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:16.889357 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:16.922581 2149446 cri.go:96] found id: ""
	I1222 01:51:16.922656 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.922677 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:16.922697 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:16.922788 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:16.947759 2149446 cri.go:96] found id: ""
	I1222 01:51:16.947833 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.947855 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:16.947876 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:16.947959 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:16.973072 2149446 cri.go:96] found id: ""
	I1222 01:51:16.973106 2149446 logs.go:282] 0 containers: []
	W1222 01:51:16.973115 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:16.973138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:16.973219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:17.003269 2149446 cri.go:96] found id: ""
	I1222 01:51:17.003303 2149446 logs.go:282] 0 containers: []
	W1222 01:51:17.003328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:17.003339 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:17.003371 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:17.070360 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:17.070398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:17.087501 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:17.087530 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:17.166797 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:17.158419    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.159432    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161029    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161376    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.162807    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:17.158419    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.159432    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161029    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.161376    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:17.162807    5329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:17.166818 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:17.166830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:17.196986 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:17.197018 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
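Besides the per-component listings, each cycle collects the same four pieces of evidence: the kubelet and CRI-O journals, filtered dmesg output, and a container-status listing. A sketch bundling those commands into one script (the commands are copied from the log; writing each result to a file is an illustrative addition):

    #!/usr/bin/env bash
    # Collect the same evidence the log gathers on every cycle.
    sudo journalctl -u kubelet -n 400 >kubelet.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 >dmesg.log
    sudo journalctl -u crio -n 400 >crio.log
    sudo "$(which crictl || echo crictl)" ps -a >containers.log \
      || sudo docker ps -a >containers.log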
	I1222 01:51:19.726818 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:19.737430 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:19.737505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:19.761268 2149446 cri.go:96] found id: ""
	I1222 01:51:19.761291 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.761300 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:19.761306 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:19.761388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:19.787563 2149446 cri.go:96] found id: ""
	I1222 01:51:19.787587 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.787596 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:19.787602 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:19.787665 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:19.816441 2149446 cri.go:96] found id: ""
	I1222 01:51:19.816468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.816477 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:19.816483 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:19.816540 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:19.851780 2149446 cri.go:96] found id: ""
	I1222 01:51:19.851812 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.851821 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:19.851827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:19.851895 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:19.888751 2149446 cri.go:96] found id: ""
	I1222 01:51:19.888785 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.888794 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:19.888800 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:19.888866 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:19.923050 2149446 cri.go:96] found id: ""
	I1222 01:51:19.923078 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.923087 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:19.923093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:19.923151 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:19.948402 2149446 cri.go:96] found id: ""
	I1222 01:51:19.948435 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.948444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:19.948451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:19.948520 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:19.973868 2149446 cri.go:96] found id: ""
	I1222 01:51:19.973894 2149446 logs.go:282] 0 containers: []
	W1222 01:51:19.973903 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:19.973912 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:19.973942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:20.041369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:20.032095    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.032873    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.034426    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.035065    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.036800    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:20.032095    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.032873    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.034426    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.035065    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:20.036800    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:20.041391 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:20.041404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:20.070678 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:20.070728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:20.112858 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:20.112886 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:20.189261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:20.189300 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:22.708863 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:22.719695 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:22.719790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:22.746691 2149446 cri.go:96] found id: ""
	I1222 01:51:22.746715 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.746724 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:22.746731 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:22.746790 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:22.775484 2149446 cri.go:96] found id: ""
	I1222 01:51:22.775511 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.775521 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:22.775527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:22.775592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:22.801739 2149446 cri.go:96] found id: ""
	I1222 01:51:22.801771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.801781 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:22.801787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:22.801855 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:22.826408 2149446 cri.go:96] found id: ""
	I1222 01:51:22.826431 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.826439 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:22.826446 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:22.826510 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:22.857976 2149446 cri.go:96] found id: ""
	I1222 01:51:22.858000 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.858009 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:22.858016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:22.858076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:22.895329 2149446 cri.go:96] found id: ""
	I1222 01:51:22.895357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.895365 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:22.895372 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:22.895431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:22.924900 2149446 cri.go:96] found id: ""
	I1222 01:51:22.924923 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.924932 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:22.924938 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:22.924997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:22.949093 2149446 cri.go:96] found id: ""
	I1222 01:51:22.949166 2149446 logs.go:282] 0 containers: []
	W1222 01:51:22.949177 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:22.949214 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:22.949228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:22.978825 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:22.978862 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:23.008996 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:23.009026 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:23.075469 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:23.075506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:23.092176 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:23.092253 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:23.168462 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:23.160726    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.161296    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.162792    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.163269    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.164704    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:23.160726    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.161296    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.162792    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.163269    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:23.164704    5570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:25.669110 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:25.679471 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:25.679544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:25.707773 2149446 cri.go:96] found id: ""
	I1222 01:51:25.707798 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.707807 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:25.707814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:25.707872 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:25.734428 2149446 cri.go:96] found id: ""
	I1222 01:51:25.734452 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.734460 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:25.734480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:25.734536 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:25.764253 2149446 cri.go:96] found id: ""
	I1222 01:51:25.764349 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.764381 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:25.764416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:25.764515 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:25.791966 2149446 cri.go:96] found id: ""
	I1222 01:51:25.792042 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.792076 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:25.792100 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:25.792193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:25.820057 2149446 cri.go:96] found id: ""
	I1222 01:51:25.820132 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.820167 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:25.820191 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:25.820279 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:25.854439 2149446 cri.go:96] found id: ""
	I1222 01:51:25.854514 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.854537 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:25.854556 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:25.854661 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:25.887808 2149446 cri.go:96] found id: ""
	I1222 01:51:25.887884 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.887905 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:25.887919 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:25.887996 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:25.922356 2149446 cri.go:96] found id: ""
	I1222 01:51:25.922380 2149446 logs.go:282] 0 containers: []
	W1222 01:51:25.922389 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:25.922398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:25.922443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:25.987720 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:25.987760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:26.005813 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:26.005849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:26.075026 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:26.066017    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.066714    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.068425    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.069221    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.070982    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:26.066017    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.066714    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.068425    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.069221    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:26.070982    5670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:26.075049 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:26.075061 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:26.105526 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:26.105564 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.644504 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:28.655061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:28.655129 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:28.679454 2149446 cri.go:96] found id: ""
	I1222 01:51:28.679477 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.679486 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:28.679492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:28.679551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:28.708993 2149446 cri.go:96] found id: ""
	I1222 01:51:28.709018 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.709027 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:28.709033 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:28.709095 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:28.733612 2149446 cri.go:96] found id: ""
	I1222 01:51:28.733637 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.733646 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:28.733652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:28.733711 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:28.762256 2149446 cri.go:96] found id: ""
	I1222 01:51:28.762285 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.762294 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:28.762300 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:28.762359 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:28.788169 2149446 cri.go:96] found id: ""
	I1222 01:51:28.788193 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.788201 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:28.788207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:28.788266 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:28.816745 2149446 cri.go:96] found id: ""
	I1222 01:51:28.816771 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.816780 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:28.816787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:28.816852 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:28.845364 2149446 cri.go:96] found id: ""
	I1222 01:51:28.845392 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.845401 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:28.845407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:28.845469 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:28.881787 2149446 cri.go:96] found id: ""
	I1222 01:51:28.881818 2149446 logs.go:282] 0 containers: []
	W1222 01:51:28.881827 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:28.881836 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:28.881855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:28.914780 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:28.914845 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:28.954392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:28.954419 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:29.021327 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:29.021365 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:29.037773 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:29.037801 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:29.100009 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:29.091767    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.092439    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.093858    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.094561    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.096086    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:29.091767    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.092439    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.093858    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.094561    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:29.096086    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:31.600320 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:31.612534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:31.612605 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:31.639353 2149446 cri.go:96] found id: ""
	I1222 01:51:31.639376 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.639384 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:31.639391 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:31.639449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:31.664903 2149446 cri.go:96] found id: ""
	I1222 01:51:31.664927 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.664936 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:31.664942 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:31.665003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:31.694601 2149446 cri.go:96] found id: ""
	I1222 01:51:31.694626 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.694635 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:31.694642 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:31.694700 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:31.719433 2149446 cri.go:96] found id: ""
	I1222 01:51:31.719476 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.719485 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:31.719493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:31.719573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:31.744350 2149446 cri.go:96] found id: ""
	I1222 01:51:31.744379 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.744388 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:31.744395 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:31.744457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:31.768144 2149446 cri.go:96] found id: ""
	I1222 01:51:31.768171 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.768180 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:31.768187 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:31.768246 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:31.793170 2149446 cri.go:96] found id: ""
	I1222 01:51:31.793195 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.793204 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:31.793211 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:31.793269 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:31.817469 2149446 cri.go:96] found id: ""
	I1222 01:51:31.817495 2149446 logs.go:282] 0 containers: []
	W1222 01:51:31.817510 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:31.817520 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:31.817531 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:31.833979 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:31.834009 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:31.921313 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:31.912648    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.913191    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.914772    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.915315    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:31.916842    5887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:31.921411 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:31.921424 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:31.950732 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:31.950766 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:31.979280 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:31.979390 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:34.551921 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:34.562702 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:34.562773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:34.587409 2149446 cri.go:96] found id: ""
	I1222 01:51:34.587437 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.587446 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:34.587453 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:34.587512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:34.615168 2149446 cri.go:96] found id: ""
	I1222 01:51:34.615194 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.615203 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:34.615210 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:34.615270 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:34.640950 2149446 cri.go:96] found id: ""
	I1222 01:51:34.640975 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.640983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:34.640990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:34.641070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:34.666882 2149446 cri.go:96] found id: ""
	I1222 01:51:34.666908 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.666917 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:34.666924 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:34.667003 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:34.691815 2149446 cri.go:96] found id: ""
	I1222 01:51:34.691841 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.691850 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:34.691856 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:34.691935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:34.716718 2149446 cri.go:96] found id: ""
	I1222 01:51:34.716744 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.716753 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:34.716760 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:34.716819 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:34.741533 2149446 cri.go:96] found id: ""
	I1222 01:51:34.741557 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.741565 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:34.741572 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:34.741638 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:34.766353 2149446 cri.go:96] found id: ""
	I1222 01:51:34.766378 2149446 logs.go:282] 0 containers: []
	W1222 01:51:34.766387 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:34.766398 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:34.766410 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:34.831354 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:34.831392 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:34.849488 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:34.849517 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:34.936942 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:34.926834    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.927637    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929290    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929888    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.931562    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:34.926834    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.927637    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929290    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.929888    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:34.931562    6013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:51:34.936964 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:34.936976 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:34.966069 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:34.966106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
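Structurally, this whole stretch of the log is one poll-until-deadline loop: pgrep for the apiserver process, list containers, gather logs, wait roughly three seconds, repeat; the apiserver never appears. A minimal sketch of that shape (the 3s interval matches the timestamps above; the 6-minute deadline is an assumed placeholder, not a logged value; the pgrep and crictl invocations are from the log):

    #!/usr/bin/env bash
    # Poll until kube-apiserver shows up or the deadline passes.
    deadline=$((SECONDS + 360))   # assumed placeholder, not taken from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      # Empty output here corresponds to the log's 'found id: ""' lines.
      sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
      sleep 3
    done
    echo "kube-apiserver is running"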
	I1222 01:51:37.498453 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:37.509387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:37.509460 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:37.536333 2149446 cri.go:96] found id: ""
	I1222 01:51:37.536418 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.536440 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:37.536458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:37.536564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:37.561738 2149446 cri.go:96] found id: ""
	I1222 01:51:37.561761 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.561770 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:37.561777 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:37.561835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:37.589068 2149446 cri.go:96] found id: ""
	I1222 01:51:37.589103 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.589112 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:37.589118 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:37.589187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:37.613720 2149446 cri.go:96] found id: ""
	I1222 01:51:37.613745 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.613753 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:37.613760 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:37.613828 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:37.638911 2149446 cri.go:96] found id: ""
	I1222 01:51:37.638936 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.638944 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:37.638950 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:37.639030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:37.664184 2149446 cri.go:96] found id: ""
	I1222 01:51:37.664209 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.664218 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:37.664224 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:37.664336 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:37.689816 2149446 cri.go:96] found id: ""
	I1222 01:51:37.689840 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.689849 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:37.689855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:37.689915 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:37.714864 2149446 cri.go:96] found id: ""
	I1222 01:51:37.714888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:37.714896 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:37.714905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:37.714919 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:37.731696 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:37.731722 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:37.802911 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:37.789256    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.794884    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.795605    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.797500    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:37.798024    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:37.802930 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:37.802944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:37.831970 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:37.832005 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:37.864010 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:37.864044 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
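
Every "describe nodes" attempt above fails inside kubectl's discovery calls (the memcache.go:265 errors) because nothing is listening on localhost:8443; the crictl checks show the kube-apiserver container was never created. From the node, a plain TCP dial is enough to confirm the symptom. A minimal sketch, not minikube code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubeconfig in the log points at https://localhost:8443.
        // "connection refused" at the TCP layer means nothing is bound
        // to the port, so kubectl fails before any TLS handshake.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }
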
	I1222 01:51:40.445669 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:40.458164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:40.458231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:40.484132 2149446 cri.go:96] found id: ""
	I1222 01:51:40.484159 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.484167 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:40.484173 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:40.484231 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:40.510161 2149446 cri.go:96] found id: ""
	I1222 01:51:40.510187 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.510195 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:40.510201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:40.510263 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:40.537903 2149446 cri.go:96] found id: ""
	I1222 01:51:40.537929 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.537938 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:40.537944 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:40.538008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:40.569673 2149446 cri.go:96] found id: ""
	I1222 01:51:40.569699 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.569707 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:40.569714 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:40.569773 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:40.594097 2149446 cri.go:96] found id: ""
	I1222 01:51:40.594122 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.594130 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:40.594137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:40.594199 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:40.619442 2149446 cri.go:96] found id: ""
	I1222 01:51:40.619468 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.619477 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:40.619486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:40.619545 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:40.644458 2149446 cri.go:96] found id: ""
	I1222 01:51:40.644485 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.644494 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:40.644500 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:40.644559 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:40.669598 2149446 cri.go:96] found id: ""
	I1222 01:51:40.669625 2149446 logs.go:282] 0 containers: []
	W1222 01:51:40.669634 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:40.669642 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:40.669655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:40.686418 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:40.686488 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:40.747161 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:40.739409    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.739815    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.740969    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.741303    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:40.742747    6233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:40.747183 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:40.747195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:40.774671 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:40.774703 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:40.806536 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:40.806568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:43.371926 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:43.382998 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:43.383098 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:43.418960 2149446 cri.go:96] found id: ""
	I1222 01:51:43.419037 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.419060 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:43.419081 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:43.419145 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:43.447476 2149446 cri.go:96] found id: ""
	I1222 01:51:43.447502 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.447511 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:43.447517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:43.447586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:43.474323 2149446 cri.go:96] found id: ""
	I1222 01:51:43.474357 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.474367 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:43.474374 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:43.474448 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:43.500245 2149446 cri.go:96] found id: ""
	I1222 01:51:43.500323 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.500371 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:43.500390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:43.500484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:43.526338 2149446 cri.go:96] found id: ""
	I1222 01:51:43.526361 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.526369 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:43.526375 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:43.526434 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:43.551187 2149446 cri.go:96] found id: ""
	I1222 01:51:43.551211 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.551220 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:43.551227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:43.551288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:43.576838 2149446 cri.go:96] found id: ""
	I1222 01:51:43.576861 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.576870 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:43.576877 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:43.576939 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:43.603211 2149446 cri.go:96] found id: ""
	I1222 01:51:43.603283 2149446 logs.go:282] 0 containers: []
	W1222 01:51:43.603303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:43.603326 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:43.603363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:43.671168 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:43.671202 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:43.688181 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:43.688252 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:43.752146 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:43.744254    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.744955    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746431    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.746780    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:43.748199    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:43.752167 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:43.752178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:43.780617 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:43.780654 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
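
Each cycle opens with the same process probe, "sudo pgrep -xnf kube-apiserver.*minikube.*". pgrep exits 0 when a process matches the pattern and 1 when none does, so the exit status alone answers whether an apiserver process is up, with no output parsing needed. A small sketch of that probe under the same assumptions as above (sudo available; the wrapper is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep -x matches the name exactly, -n takes the newest match,
        // -f matches against the full command line. Exit status 1
        // (a non-nil error here) means no matching process exists.
        err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Println("kube-apiserver process is running")
    }
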
	I1222 01:51:46.308524 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:46.318956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:46.319027 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:46.346481 2149446 cri.go:96] found id: ""
	I1222 01:51:46.346508 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.346517 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:46.346523 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:46.346582 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:46.375577 2149446 cri.go:96] found id: ""
	I1222 01:51:46.375603 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.375612 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:46.375618 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:46.375681 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:46.410693 2149446 cri.go:96] found id: ""
	I1222 01:51:46.410721 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.410730 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:46.410736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:46.410805 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:46.437367 2149446 cri.go:96] found id: ""
	I1222 01:51:46.437391 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.437400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:46.437407 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:46.437466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:46.463337 2149446 cri.go:96] found id: ""
	I1222 01:51:46.463364 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.463373 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:46.463379 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:46.463436 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:46.487509 2149446 cri.go:96] found id: ""
	I1222 01:51:46.487532 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.487540 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:46.487547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:46.487609 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:46.513965 2149446 cri.go:96] found id: ""
	I1222 01:51:46.513989 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.513998 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:46.514037 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:46.514111 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:46.539974 2149446 cri.go:96] found id: ""
	I1222 01:51:46.539999 2149446 logs.go:282] 0 containers: []
	W1222 01:51:46.540008 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:46.540017 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:46.540032 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:46.604304 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:46.604345 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:46.620864 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:46.620894 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:46.682614 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:46.674017    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.674824    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676316    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.676738    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:46.678200    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:46.682639 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:46.682652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:46.711167 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:46.711201 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:49.246342 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:49.256853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:49.256925 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:49.285523 2149446 cri.go:96] found id: ""
	I1222 01:51:49.285548 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.285557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:49.285563 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:49.285621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:49.314893 2149446 cri.go:96] found id: ""
	I1222 01:51:49.314915 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.314923 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:49.314930 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:49.314988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:49.346252 2149446 cri.go:96] found id: ""
	I1222 01:51:49.346281 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.346290 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:49.346297 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:49.346353 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:49.380477 2149446 cri.go:96] found id: ""
	I1222 01:51:49.380505 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.380514 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:49.380521 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:49.380590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:49.414563 2149446 cri.go:96] found id: ""
	I1222 01:51:49.414586 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.414595 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:49.414601 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:49.414660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:49.439047 2149446 cri.go:96] found id: ""
	I1222 01:51:49.439072 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.439081 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:49.439088 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:49.439171 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:49.465565 2149446 cri.go:96] found id: ""
	I1222 01:51:49.465592 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.465601 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:49.465607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:49.465685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:49.489816 2149446 cri.go:96] found id: ""
	I1222 01:51:49.489838 2149446 logs.go:282] 0 containers: []
	W1222 01:51:49.489847 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:49.489856 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:49.489871 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:49.555234 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:49.555269 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:49.573085 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:49.573124 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:49.640369 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:49.632082    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.632738    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634246    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.634633    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:49.636132    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:49.640390 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:49.640402 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:49.668820 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:49.668855 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
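
The failure blocks all share one shape: exit status 1, an empty stdout, and five discovery retries plus the final "connection refused" line on stderr. That is exactly what running kubectl through bash with the two streams captured separately produces; a minimal sketch that reproduces the shape, assuming the kubectl path from the log exists on the host:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the same command the log runs, keeping stdout and stderr
        // apart. With the apiserver down, stdout stays empty and the
        // client-side retry errors all land on stderr.
        cmd := exec.Command("/bin/bash", "-c",
            `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run()
        fmt.Printf("stdout: %q\n", stdout.String())
        fmt.Printf("stderr: %q\n", stderr.String())
        fmt.Printf("err: %v\n", err) // e.g. "exit status 1"
    }
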
	I1222 01:51:52.197481 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:52.208164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:52.208283 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:52.240218 2149446 cri.go:96] found id: ""
	I1222 01:51:52.240244 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.240252 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:52.240258 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:52.240325 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:52.267476 2149446 cri.go:96] found id: ""
	I1222 01:51:52.267503 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.267512 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:52.267518 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:52.267588 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:52.292286 2149446 cri.go:96] found id: ""
	I1222 01:51:52.292310 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.292319 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:52.292332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:52.292395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:52.320140 2149446 cri.go:96] found id: ""
	I1222 01:51:52.320164 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.320173 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:52.320179 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:52.320256 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:52.353854 2149446 cri.go:96] found id: ""
	I1222 01:51:52.353888 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.353897 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:52.353903 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:52.354001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:52.383818 2149446 cri.go:96] found id: ""
	I1222 01:51:52.383851 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.383860 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:52.383867 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:52.383958 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:52.418591 2149446 cri.go:96] found id: ""
	I1222 01:51:52.418616 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.418624 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:52.418631 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:52.418714 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:52.449289 2149446 cri.go:96] found id: ""
	I1222 01:51:52.449383 2149446 logs.go:282] 0 containers: []
	W1222 01:51:52.449407 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:52.449430 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:52.449461 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:52.525065 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:52.525107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:52.543910 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:52.543942 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:52.607708 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:52.599699    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.600218    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.601875    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.602334    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:52.603935    6688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:52.607772 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:52.607790 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:52.636841 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:52.636879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:55.167236 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:55.178565 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:55.178664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:55.207461 2149446 cri.go:96] found id: ""
	I1222 01:51:55.207486 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.207501 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:55.207507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:55.207570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:55.232542 2149446 cri.go:96] found id: ""
	I1222 01:51:55.232615 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.232638 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:55.232657 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:55.232741 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:55.258972 2149446 cri.go:96] found id: ""
	I1222 01:51:55.258999 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.259008 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:55.259016 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:55.259078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:55.285371 2149446 cri.go:96] found id: ""
	I1222 01:51:55.285397 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.285406 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:55.285413 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:55.285484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:55.313304 2149446 cri.go:96] found id: ""
	I1222 01:51:55.313354 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.313363 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:55.313370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:55.313439 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:55.347910 2149446 cri.go:96] found id: ""
	I1222 01:51:55.347940 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.347948 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:55.347955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:55.348028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:55.381847 2149446 cri.go:96] found id: ""
	I1222 01:51:55.381869 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.381877 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:55.381884 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:55.381948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:55.418137 2149446 cri.go:96] found id: ""
	I1222 01:51:55.418161 2149446 logs.go:282] 0 containers: []
	W1222 01:51:55.418169 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:55.418179 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:55.418192 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:55.484714 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:55.484752 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:55.501483 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:55.501575 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:55.578456 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:55.569912    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.570649    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.572190    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.572838    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:55.574290    6801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:51:55.578476 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:55.578487 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:55.606594 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:55.606627 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:58.138273 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:51:58.149164 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:51:58.149233 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:51:58.176492 2149446 cri.go:96] found id: ""
	I1222 01:51:58.176517 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.176525 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:51:58.176532 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:51:58.176592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:51:58.200700 2149446 cri.go:96] found id: ""
	I1222 01:51:58.200724 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.200733 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:51:58.200740 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:51:58.200799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:51:58.227476 2149446 cri.go:96] found id: ""
	I1222 01:51:58.227501 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.227511 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:51:58.227517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:51:58.227582 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:51:58.252693 2149446 cri.go:96] found id: ""
	I1222 01:51:58.252720 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.252728 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:51:58.252736 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:51:58.252801 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:51:58.277812 2149446 cri.go:96] found id: ""
	I1222 01:51:58.277835 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.277843 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:51:58.277850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:51:58.277910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:51:58.303365 2149446 cri.go:96] found id: ""
	I1222 01:51:58.303400 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.303409 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:51:58.303432 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:51:58.303509 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:51:58.328480 2149446 cri.go:96] found id: ""
	I1222 01:51:58.328504 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.328513 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:51:58.328520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:51:58.328578 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:51:58.359399 2149446 cri.go:96] found id: ""
	I1222 01:51:58.359426 2149446 logs.go:282] 0 containers: []
	W1222 01:51:58.359436 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:51:58.359445 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:51:58.359456 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:51:58.391493 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:51:58.391529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:51:58.429463 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:51:58.429487 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:51:58.494632 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:51:58.494667 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:51:58.511259 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:51:58.511294 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:51:58.572875 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:58.564819    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.565262    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566536    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566986    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.568425    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:51:58.564819    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.565262    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566536    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.566986    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:58.568425    6927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:01.073148 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:01.084069 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:01.084185 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:01.116512 2149446 cri.go:96] found id: ""
	I1222 01:52:01.116590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.116626 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:01.116635 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:01.116729 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:01.155908 2149446 cri.go:96] found id: ""
	I1222 01:52:01.155940 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.155948 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:01.155956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:01.156039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:01.190906 2149446 cri.go:96] found id: ""
	I1222 01:52:01.190992 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.191015 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:01.191036 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:01.191159 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:01.228209 2149446 cri.go:96] found id: ""
	I1222 01:52:01.228284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.228300 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:01.228311 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:01.228390 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:01.258097 2149446 cri.go:96] found id: ""
	I1222 01:52:01.258122 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.258131 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:01.258138 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:01.258257 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:01.288999 2149446 cri.go:96] found id: ""
	I1222 01:52:01.289026 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.289035 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:01.289042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:01.289107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:01.318779 2149446 cri.go:96] found id: ""
	I1222 01:52:01.318806 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.318817 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:01.318825 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:01.318896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:01.354526 2149446 cri.go:96] found id: ""
	I1222 01:52:01.354551 2149446 logs.go:282] 0 containers: []
	W1222 01:52:01.354560 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:01.354568 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:01.354580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:01.377079 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:01.377178 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:01.467471 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:01.456482    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.457416    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.459443    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.460466    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.462034    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:01.456482    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.457416    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.459443    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.460466    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:01.462034    7022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:01.467498 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:01.467512 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:01.497872 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:01.497924 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:01.538392 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:01.538422 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:04.105458 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:04.117694 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:04.117772 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:04.145575 2149446 cri.go:96] found id: ""
	I1222 01:52:04.145610 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.145618 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:04.145624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:04.145682 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:04.175381 2149446 cri.go:96] found id: ""
	I1222 01:52:04.175407 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.175416 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:04.175422 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:04.175501 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:04.201486 2149446 cri.go:96] found id: ""
	I1222 01:52:04.201511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.201520 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:04.201526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:04.201627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:04.226750 2149446 cri.go:96] found id: ""
	I1222 01:52:04.226776 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.226785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:04.226791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:04.226868 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:04.254116 2149446 cri.go:96] found id: ""
	I1222 01:52:04.254180 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.254194 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:04.254201 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:04.254260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:04.279876 2149446 cri.go:96] found id: ""
	I1222 01:52:04.279900 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.279909 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:04.279916 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:04.279979 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:04.305403 2149446 cri.go:96] found id: ""
	I1222 01:52:04.305425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.305433 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:04.305439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:04.305500 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:04.334607 2149446 cri.go:96] found id: ""
	I1222 01:52:04.334633 2149446 logs.go:282] 0 containers: []
	W1222 01:52:04.334641 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:04.334650 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:04.334682 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:04.373276 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:04.373306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:04.446848 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:04.446882 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:04.464981 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:04.465008 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:04.530912 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:04.522524    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.523282    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.524828    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.525152    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.526604    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:04.522524    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.523282    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.524828    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.525152    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:04.526604    7149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:04.530931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:04.530943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:07.063151 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:07.073864 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:07.073937 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:07.099937 2149446 cri.go:96] found id: ""
	I1222 01:52:07.099966 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.099975 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:07.099981 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:07.100043 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:07.127907 2149446 cri.go:96] found id: ""
	I1222 01:52:07.127934 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.127943 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:07.127949 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:07.128009 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:07.162965 2149446 cri.go:96] found id: ""
	I1222 01:52:07.162991 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.162999 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:07.163006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:07.163067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:07.192890 2149446 cri.go:96] found id: ""
	I1222 01:52:07.192916 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.192925 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:07.192932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:07.192994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:07.221463 2149446 cri.go:96] found id: ""
	I1222 01:52:07.221495 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.221504 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:07.221512 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:07.221572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:07.247489 2149446 cri.go:96] found id: ""
	I1222 01:52:07.247517 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.247527 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:07.247535 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:07.247600 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:07.272728 2149446 cri.go:96] found id: ""
	I1222 01:52:07.272755 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.272764 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:07.272771 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:07.272849 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:07.297364 2149446 cri.go:96] found id: ""
	I1222 01:52:07.297389 2149446 logs.go:282] 0 containers: []
	W1222 01:52:07.297398 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:07.297407 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:07.297418 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:07.325214 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:07.325251 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:07.360148 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:07.360177 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:07.432261 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:07.432295 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:07.450157 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:07.450186 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:07.516409 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:07.507778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.508648    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510237    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.512438    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:07.507778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.508648    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510237    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.510778    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:07.512438    7264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:10.017883 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:10.030135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:10.030223 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:10.060521 2149446 cri.go:96] found id: ""
	I1222 01:52:10.060547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.060556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:10.060563 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:10.060627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:10.088074 2149446 cri.go:96] found id: ""
	I1222 01:52:10.088112 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.088124 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:10.088133 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:10.088200 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:10.130279 2149446 cri.go:96] found id: ""
	I1222 01:52:10.130306 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.130315 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:10.130321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:10.130387 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:10.163042 2149446 cri.go:96] found id: ""
	I1222 01:52:10.163067 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.163078 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:10.163085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:10.163147 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:10.190658 2149446 cri.go:96] found id: ""
	I1222 01:52:10.190684 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.190693 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:10.190700 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:10.190763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:10.218153 2149446 cri.go:96] found id: ""
	I1222 01:52:10.218181 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.218189 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:10.218197 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:10.218261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:10.244623 2149446 cri.go:96] found id: ""
	I1222 01:52:10.244647 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.244656 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:10.244662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:10.244755 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:10.270908 2149446 cri.go:96] found id: ""
	I1222 01:52:10.270931 2149446 logs.go:282] 0 containers: []
	W1222 01:52:10.270939 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:10.270948 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:10.270959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:10.299810 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:10.299846 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:10.331687 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:10.331716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:10.407109 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:10.407148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:10.425255 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:10.425286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:10.506189 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:10.497058    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.497925    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.499600    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.500185    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.501961    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:10.497058    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.497925    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.499600    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.500185    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:10.501961    7374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:13.007836 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:13.019282 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:13.019388 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:13.045528 2149446 cri.go:96] found id: ""
	I1222 01:52:13.045553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.045562 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:13.045568 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:13.045631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:13.071460 2149446 cri.go:96] found id: ""
	I1222 01:52:13.071528 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.071542 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:13.071549 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:13.071623 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:13.103717 2149446 cri.go:96] found id: ""
	I1222 01:52:13.103749 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.103758 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:13.103765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:13.103835 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:13.133730 2149446 cri.go:96] found id: ""
	I1222 01:52:13.133757 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.133772 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:13.133787 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:13.133846 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:13.170719 2149446 cri.go:96] found id: ""
	I1222 01:52:13.170799 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.170841 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:13.170860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:13.170948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:13.197725 2149446 cri.go:96] found id: ""
	I1222 01:52:13.197801 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.197823 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:13.197842 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:13.197926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:13.224209 2149446 cri.go:96] found id: ""
	I1222 01:52:13.224404 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.224492 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:13.224517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:13.224639 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:13.249315 2149446 cri.go:96] found id: ""
	I1222 01:52:13.249353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:13.249362 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:13.249371 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:13.249382 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:13.280377 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:13.280404 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:13.346072 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:13.346106 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:13.364038 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:13.364070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:13.433131 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:13.424970    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.425643    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427164    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427697    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.429162    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:13.424970    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.425643    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427164    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.427697    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:13.429162    7486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:13.433163 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:13.433175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:15.961525 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:15.972093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:15.972167 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:15.996879 2149446 cri.go:96] found id: ""
	I1222 01:52:15.996913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:15.996922 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:15.996929 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:15.996994 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:16.024677 2149446 cri.go:96] found id: ""
	I1222 01:52:16.024700 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.024709 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:16.024715 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:16.024777 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:16.050558 2149446 cri.go:96] found id: ""
	I1222 01:52:16.050590 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.050599 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:16.050606 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:16.050666 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:16.076374 2149446 cri.go:96] found id: ""
	I1222 01:52:16.076400 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.076409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:16.076416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:16.076476 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:16.107298 2149446 cri.go:96] found id: ""
	I1222 01:52:16.107323 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.107331 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:16.107339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:16.107402 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:16.139868 2149446 cri.go:96] found id: ""
	I1222 01:52:16.139895 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.139904 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:16.139911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:16.139972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:16.170608 2149446 cri.go:96] found id: ""
	I1222 01:52:16.170631 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.170640 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:16.170646 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:16.170708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:16.195886 2149446 cri.go:96] found id: ""
	I1222 01:52:16.195913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:16.195932 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:16.195941 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:16.195954 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:16.262211 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:16.262249 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:16.280423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:16.280452 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:16.356185 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:16.346872    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.347685    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.349384    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.350021    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.351608    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:16.346872    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.347685    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.349384    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.350021    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:16.351608    7589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:16.356258 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:16.356286 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:16.391461 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:16.391538 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:18.932633 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:18.944359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:18.944431 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:18.971356 2149446 cri.go:96] found id: ""
	I1222 01:52:18.971379 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.971388 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:18.971394 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:18.971457 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:18.998044 2149446 cri.go:96] found id: ""
	I1222 01:52:18.998070 2149446 logs.go:282] 0 containers: []
	W1222 01:52:18.998078 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:18.998085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:18.998149 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:19.028529 2149446 cri.go:96] found id: ""
	I1222 01:52:19.028553 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.028566 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:19.028573 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:19.028633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:19.052808 2149446 cri.go:96] found id: ""
	I1222 01:52:19.052833 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.052843 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:19.052850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:19.052910 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:19.077981 2149446 cri.go:96] found id: ""
	I1222 01:52:19.078004 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.078012 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:19.078019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:19.078080 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:19.105393 2149446 cri.go:96] found id: ""
	I1222 01:52:19.105422 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.105433 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:19.105439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:19.105498 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:19.146968 2149446 cri.go:96] found id: ""
	I1222 01:52:19.146995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.147004 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:19.147010 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:19.147075 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:19.176941 2149446 cri.go:96] found id: ""
	I1222 01:52:19.176965 2149446 logs.go:282] 0 containers: []
	W1222 01:52:19.176974 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:19.176983 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:19.176995 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:19.246864 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:19.246904 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:19.266042 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:19.266070 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:19.330582 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:19.322391    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.323105    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.324610    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.325067    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.326613    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:19.322391    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.323105    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.324610    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.325067    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:19.326613    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:19.330602 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:19.330615 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:19.366422 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:19.366457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:21.896072 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:21.906705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:21.906774 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:21.931888 2149446 cri.go:96] found id: ""
	I1222 01:52:21.931913 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.931921 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:21.931928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:21.931988 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:21.959852 2149446 cri.go:96] found id: ""
	I1222 01:52:21.959925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.959959 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:21.959984 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:21.960070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:21.984377 2149446 cri.go:96] found id: ""
	I1222 01:52:21.984402 2149446 logs.go:282] 0 containers: []
	W1222 01:52:21.984410 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:21.984423 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:21.984485 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:22.015185 2149446 cri.go:96] found id: ""
	I1222 01:52:22.015212 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.015221 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:22.015227 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:22.015290 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:22.041402 2149446 cri.go:96] found id: ""
	I1222 01:52:22.041427 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.041436 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:22.041442 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:22.041505 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:22.069753 2149446 cri.go:96] found id: ""
	I1222 01:52:22.069780 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.069790 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:22.069797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:22.069863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:22.100905 2149446 cri.go:96] found id: ""
	I1222 01:52:22.100941 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.100950 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:22.100956 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:22.101028 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:22.133734 2149446 cri.go:96] found id: ""
	I1222 01:52:22.133770 2149446 logs.go:282] 0 containers: []
	W1222 01:52:22.133778 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:22.133787 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:22.133799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:22.219653 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:22.219690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:22.236668 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:22.236699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:22.306717 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:22.298969    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.299431    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301029    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301450    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.302886    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:22.298969    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.299431    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301029    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.301450    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:22.302886    7812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:22.306779 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:22.306799 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:22.334960 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:22.334996 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
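The timestamps above (01:52:19 -> 01:52:21.9 -> 01:52:24.9 ...) show the same diagnostic pass repeating on a roughly three-second cadence, gated on a `pgrep` probe for a running kube-apiserver process. A minimal sketch of such a wait loop, assuming a fixed interval and an illustrative deadline (the real timeout is not visible in this excerpt):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe;
	// pgrep exits non-zero when no process matches, so Run() returns
	// an error in that case.
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative, not the real timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}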
	I1222 01:52:24.888617 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:24.899592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:24.899667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:24.924136 2149446 cri.go:96] found id: ""
	I1222 01:52:24.924161 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.924171 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:24.924177 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:24.924237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:24.949757 2149446 cri.go:96] found id: ""
	I1222 01:52:24.949782 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.949791 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:24.949797 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:24.949862 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:24.975441 2149446 cri.go:96] found id: ""
	I1222 01:52:24.975468 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.975476 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:24.975482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:24.975544 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:24.999474 2149446 cri.go:96] found id: ""
	I1222 01:52:24.999500 2149446 logs.go:282] 0 containers: []
	W1222 01:52:24.999509 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:24.999517 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:24.999624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:25.034630 2149446 cri.go:96] found id: ""
	I1222 01:52:25.034655 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.034663 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:25.034670 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:25.034737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:25.060352 2149446 cri.go:96] found id: ""
	I1222 01:52:25.060375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.060385 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:25.060392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:25.060516 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:25.088743 2149446 cri.go:96] found id: ""
	I1222 01:52:25.088768 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.088777 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:25.088784 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:25.088845 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:25.116748 2149446 cri.go:96] found id: ""
	I1222 01:52:25.116781 2149446 logs.go:282] 0 containers: []
	W1222 01:52:25.116790 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:25.116800 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:25.116813 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:25.198386 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:25.198430 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:25.216584 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:25.216619 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:25.285946 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:25.278237    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.278629    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280115    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280453    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.281691    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:25.278237    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.278629    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280115    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.280453    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:25.281691    7926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:25.285971 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:25.285986 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:25.313622 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:25.313652 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
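Every `kubectl describe nodes` attempt above fails the same way: nothing is listening on localhost:8443, so the TCP dial is refused before API discovery can even start. A minimal standalone Go probe that reproduces that failure mode (this is an illustration, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver bound to the port this prints a
		// "connect: connection refused" error, matching the stderr above.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is reachable")
}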
	I1222 01:52:27.852014 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:27.862934 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:27.863049 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:27.888699 2149446 cri.go:96] found id: ""
	I1222 01:52:27.888735 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.888744 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:27.888751 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:27.888822 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:27.914148 2149446 cri.go:96] found id: ""
	I1222 01:52:27.914223 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.914246 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:27.914259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:27.914337 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:27.939481 2149446 cri.go:96] found id: ""
	I1222 01:52:27.939560 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.939581 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:27.939595 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:27.939668 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:27.967633 2149446 cri.go:96] found id: ""
	I1222 01:52:27.967662 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.967671 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:27.967678 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:27.967785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:27.991808 2149446 cri.go:96] found id: ""
	I1222 01:52:27.991889 2149446 logs.go:282] 0 containers: []
	W1222 01:52:27.991911 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:27.991932 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:27.992037 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:28.023583 2149446 cri.go:96] found id: ""
	I1222 01:52:28.023622 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.023631 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:28.023638 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:28.023709 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:28.052545 2149446 cri.go:96] found id: ""
	I1222 01:52:28.052625 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.052648 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:28.052672 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:28.052763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:28.078522 2149446 cri.go:96] found id: ""
	I1222 01:52:28.078550 2149446 logs.go:282] 0 containers: []
	W1222 01:52:28.078558 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:28.078567 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:28.078580 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:28.147757 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:28.147792 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:28.165530 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:28.165560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:28.232073 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:28.223681    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.224341    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.225912    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.226397    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.228007    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:28.223681    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.224341    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.225912    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.226397    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:28.228007    8040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:28.232094 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:28.232107 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:28.260533 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:28.260569 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
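The "Gathering logs for ..." steps in each cycle are plain shell pipelines that minikube runs over SSH inside the node. A minimal local sketch of that fan-out, with the commands copied verbatim from the log above (running them locally rather than over SSH is the only simplification):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		// CombinedOutput captures stdout and stderr together, which is
		// how the gathered logs end up interleaved in reports like this one.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}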
	I1222 01:52:30.796554 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:30.807742 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:30.807813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:30.835334 2149446 cri.go:96] found id: ""
	I1222 01:52:30.835361 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.835370 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:30.835377 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:30.835467 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:30.861061 2149446 cri.go:96] found id: ""
	I1222 01:52:30.861101 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.861113 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:30.861121 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:30.861195 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:30.888925 2149446 cri.go:96] found id: ""
	I1222 01:52:30.888999 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.889021 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:30.889042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:30.889128 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:30.916374 2149446 cri.go:96] found id: ""
	I1222 01:52:30.916448 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.916470 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:30.916490 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:30.916583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:30.941908 2149446 cri.go:96] found id: ""
	I1222 01:52:30.941932 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.941940 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:30.941947 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:30.942017 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:30.968265 2149446 cri.go:96] found id: ""
	I1222 01:52:30.968290 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.968298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:30.968305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:30.968372 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:30.995148 2149446 cri.go:96] found id: ""
	I1222 01:52:30.995174 2149446 logs.go:282] 0 containers: []
	W1222 01:52:30.995183 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:30.995189 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:30.995250 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:31.022834 2149446 cri.go:96] found id: ""
	I1222 01:52:31.022861 2149446 logs.go:282] 0 containers: []
	W1222 01:52:31.022870 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:31.022879 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:31.022891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:31.083576 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:31.075792    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.076576    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.077772    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.078220    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.079697    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:31.075792    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.076576    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.077772    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.078220    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:31.079697    8142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:31.083600 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:31.083613 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:31.120484 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:31.120518 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:31.156273 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:31.156303 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:31.226587 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:31.226623 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:33.743333 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:33.753911 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:33.753986 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:33.779167 2149446 cri.go:96] found id: ""
	I1222 01:52:33.779192 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.779201 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:33.779207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:33.779265 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:33.804863 2149446 cri.go:96] found id: ""
	I1222 01:52:33.804892 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.804901 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:33.804913 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:33.804972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:33.830391 2149446 cri.go:96] found id: ""
	I1222 01:52:33.830414 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.830421 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:33.830428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:33.830488 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:33.856024 2149446 cri.go:96] found id: ""
	I1222 01:52:33.856047 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.856057 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:33.856064 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:33.856121 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:33.885352 2149446 cri.go:96] found id: ""
	I1222 01:52:33.885375 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.885383 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:33.885389 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:33.885454 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:33.910795 2149446 cri.go:96] found id: ""
	I1222 01:52:33.910819 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.910827 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:33.910835 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:33.910922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:33.937408 2149446 cri.go:96] found id: ""
	I1222 01:52:33.937435 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.937444 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:33.937451 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:33.937513 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:33.964011 2149446 cri.go:96] found id: ""
	I1222 01:52:33.964037 2149446 logs.go:282] 0 containers: []
	W1222 01:52:33.964046 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:33.964055 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:33.964069 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:34.030109 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:34.021515    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.022250    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.023865    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.024450    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.026222    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:34.021515    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.022250    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.023865    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.024450    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:34.026222    8255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:34.030135 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:34.030147 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:34.058869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:34.058905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:34.090878 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:34.090905 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:34.173693 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:34.173734 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:36.691703 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:36.703200 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:36.703287 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:36.729524 2149446 cri.go:96] found id: ""
	I1222 01:52:36.729547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.729556 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:36.729562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:36.729620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:36.755426 2149446 cri.go:96] found id: ""
	I1222 01:52:36.755451 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.755459 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:36.755465 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:36.755522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:36.780359 2149446 cri.go:96] found id: ""
	I1222 01:52:36.780384 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.780393 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:36.780399 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:36.780459 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:36.810342 2149446 cri.go:96] found id: ""
	I1222 01:52:36.810370 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.810380 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:36.810386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:36.810447 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:36.836550 2149446 cri.go:96] found id: ""
	I1222 01:52:36.836586 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.836597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:36.836619 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:36.836702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:36.863450 2149446 cri.go:96] found id: ""
	I1222 01:52:36.863476 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.863485 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:36.863492 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:36.863554 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:36.890913 2149446 cri.go:96] found id: ""
	I1222 01:52:36.890938 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.890947 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:36.890953 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:36.891014 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:36.921826 2149446 cri.go:96] found id: ""
	I1222 01:52:36.921849 2149446 logs.go:282] 0 containers: []
	W1222 01:52:36.921858 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:36.921867 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:36.921879 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:36.938824 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:36.938854 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:37.002681 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:36.993721    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.994257    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.995809    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.996244    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.997718    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:36.993721    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.994257    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.995809    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.996244    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:36.997718    8371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:37.002770 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:37.002797 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:37.038076 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:37.038111 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:37.065632 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:37.065659 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:39.635717 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:39.650457 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:39.650526 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:39.678446 2149446 cri.go:96] found id: ""
	I1222 01:52:39.678470 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.678479 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:39.678485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:39.678543 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:39.703602 2149446 cri.go:96] found id: ""
	I1222 01:52:39.703627 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.703636 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:39.703643 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:39.703702 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:39.728380 2149446 cri.go:96] found id: ""
	I1222 01:52:39.728406 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.728415 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:39.728421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:39.728484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:39.753237 2149446 cri.go:96] found id: ""
	I1222 01:52:39.753261 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.753269 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:39.753275 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:39.753373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:39.778213 2149446 cri.go:96] found id: ""
	I1222 01:52:39.778244 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.778253 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:39.778259 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:39.778319 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:39.808901 2149446 cri.go:96] found id: ""
	I1222 01:52:39.808925 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.808933 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:39.808940 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:39.808997 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:39.834452 2149446 cri.go:96] found id: ""
	I1222 01:52:39.834478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.834488 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:39.834495 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:39.834556 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:39.859475 2149446 cri.go:96] found id: ""
	I1222 01:52:39.859498 2149446 logs.go:282] 0 containers: []
	W1222 01:52:39.859507 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:39.859515 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:39.859526 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:39.893788 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:39.893816 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:39.958922 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:39.958959 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:39.975346 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:39.975380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:40.062822 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:40.052520    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.053798    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.054613    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.056904    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.057401    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:40.052520    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.053798    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.054613    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.056904    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:40.057401    8496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:40.062851 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:40.062864 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:42.596761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:42.611989 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:42.612054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:42.650231 2149446 cri.go:96] found id: ""
	I1222 01:52:42.650253 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.650261 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:42.650268 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:42.650326 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:42.677520 2149446 cri.go:96] found id: ""
	I1222 01:52:42.677547 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.677556 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:42.677562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:42.677620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:42.702450 2149446 cri.go:96] found id: ""
	I1222 01:52:42.702478 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.702487 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:42.702493 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:42.702549 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:42.727875 2149446 cri.go:96] found id: ""
	I1222 01:52:42.727947 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.727975 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:42.727994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:42.728107 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:42.753077 2149446 cri.go:96] found id: ""
	I1222 01:52:42.753159 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.753187 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:42.753207 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:42.753360 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:42.779648 2149446 cri.go:96] found id: ""
	I1222 01:52:42.779671 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.779679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:42.779686 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:42.779761 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:42.808099 2149446 cri.go:96] found id: ""
	I1222 01:52:42.808172 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.808181 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:42.808188 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:42.808247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:42.834339 2149446 cri.go:96] found id: ""
	I1222 01:52:42.834367 2149446 logs.go:282] 0 containers: []
	W1222 01:52:42.834376 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:42.834385 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:42.834399 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:42.899680 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:42.899717 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:42.916710 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:42.916749 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:42.983675 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:42.975196    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.975990    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.977681    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.978270    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.979868    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:52:42.975196    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.975990    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.977681    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.978270    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:42.979868    8596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:52:42.983698 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:42.983710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:43.011581 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:43.011616 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
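The repeated memcache.go errors above come from kubectl's discovery phase: before any verb runs, the client fetches the server API group list from GET https://localhost:8443/api. A minimal sketch of that request, assuming self-signed cluster certificates (TLS verification is skipped purely for illustration; do not do this outside a test probe):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // matches the ?timeout=32s in the logged URL
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/api?timeout=32s")
	if err != nil {
		// With the apiserver down this yields the same
		// "connect: connection refused" error kubectl logs above.
		fmt.Println("discovery failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("discovery status:", resp.Status)
}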
	I1222 01:52:45.543456 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:45.553928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:45.554038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:45.578837 2149446 cri.go:96] found id: ""
	I1222 01:52:45.578860 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.578868 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:45.578874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:45.578932 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:45.612293 2149446 cri.go:96] found id: ""
	I1222 01:52:45.612315 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.612323 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:45.612340 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:45.612407 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:45.648009 2149446 cri.go:96] found id: ""
	I1222 01:52:45.648043 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.648052 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:45.648059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:45.648157 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:45.675586 2149446 cri.go:96] found id: ""
	I1222 01:52:45.675609 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.675618 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:45.675624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:45.675686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:45.702507 2149446 cri.go:96] found id: ""
	I1222 01:52:45.702576 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.702597 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:45.702610 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:45.702686 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:45.729853 2149446 cri.go:96] found id: ""
	I1222 01:52:45.729921 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.729943 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:45.729964 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:45.730047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:45.760311 2149446 cri.go:96] found id: ""
	I1222 01:52:45.760387 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.760420 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:45.760439 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:45.760522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:45.785968 2149446 cri.go:96] found id: ""
	I1222 01:52:45.785995 2149446 logs.go:282] 0 containers: []
	W1222 01:52:45.786005 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
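Before each gather pass, minikube asks crictl for every expected control-plane workload by name — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard — and, as the empty found id: "" results show, every query comes back with no IDs. A sketch of that scan, assuming crictl is installed locally:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same names minikube probes in the scan above.
        names := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range names {
            out, err := exec.Command("sudo", "crictl", "--timeout=10s",
                "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                // Mirrors the W-level "No container was found matching" lines.
                fmt.Printf("no container matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }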
	I1222 01:52:45.786014 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:45.786027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:45.854201 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:45.854240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:45.871755 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:45.871787 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:45.937315 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:45.929306    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.929882    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.931547    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.932074    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:45.933637    8707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:45.937355 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:45.937368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:45.966159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:45.966197 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.499646 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:48.510238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:48.510330 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:48.536889 2149446 cri.go:96] found id: ""
	I1222 01:52:48.536923 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.536932 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:48.536939 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:48.537006 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:48.563218 2149446 cri.go:96] found id: ""
	I1222 01:52:48.563284 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.563305 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:48.563317 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:48.563393 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:48.588527 2149446 cri.go:96] found id: ""
	I1222 01:52:48.588561 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.588570 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:48.588576 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:48.588643 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:48.621974 2149446 cri.go:96] found id: ""
	I1222 01:52:48.622045 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.622080 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:48.622106 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:48.622197 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:48.654895 2149446 cri.go:96] found id: ""
	I1222 01:52:48.654970 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.654992 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:48.655014 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:48.655102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:48.692809 2149446 cri.go:96] found id: ""
	I1222 01:52:48.692885 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.692911 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:48.692931 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:48.693056 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:48.717194 2149446 cri.go:96] found id: ""
	I1222 01:52:48.717272 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.717294 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:48.717314 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:48.717425 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:48.742591 2149446 cri.go:96] found id: ""
	I1222 01:52:48.742616 2149446 logs.go:282] 0 containers: []
	W1222 01:52:48.742625 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:48.742633 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:48.742663 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:48.801798 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:48.794377    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.794846    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796207    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.796539    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:48.798000    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:48.801831 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:48.801843 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:48.828911 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:48.828944 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:48.860768 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:48.860798 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:48.925684 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:48.925719 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.443762 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:51.454909 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:51.454993 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:51.484486 2149446 cri.go:96] found id: ""
	I1222 01:52:51.484511 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.484520 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:51.484526 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:51.484590 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:51.510790 2149446 cri.go:96] found id: ""
	I1222 01:52:51.510814 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.510822 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:51.510828 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:51.510886 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:51.537007 2149446 cri.go:96] found id: ""
	I1222 01:52:51.537030 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.537039 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:51.537050 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:51.537110 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:51.563454 2149446 cri.go:96] found id: ""
	I1222 01:52:51.563479 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.563487 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:51.563494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:51.563551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:51.590407 2149446 cri.go:96] found id: ""
	I1222 01:52:51.590433 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.590442 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:51.590448 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:51.590512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:51.626498 2149446 cri.go:96] found id: ""
	I1222 01:52:51.626525 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.626533 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:51.626540 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:51.626604 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:51.653575 2149446 cri.go:96] found id: ""
	I1222 01:52:51.653601 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.653610 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:51.653616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:51.653676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:51.686395 2149446 cri.go:96] found id: ""
	I1222 01:52:51.686425 2149446 logs.go:282] 0 containers: []
	W1222 01:52:51.686435 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:51.686444 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:51.686457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:51.756701 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:51.756737 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:51.773423 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:51.773525 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:51.837429 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:51.829362    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.830109    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.831677    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.832019    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:51.833581    8932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:51.837453 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:51.837467 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:51.870792 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:51.870830 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.401483 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:54.411791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:54.411865 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:54.439704 2149446 cri.go:96] found id: ""
	I1222 01:52:54.439729 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.439737 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:54.439743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:54.439811 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:54.464997 2149446 cri.go:96] found id: ""
	I1222 01:52:54.465020 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.465029 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:54.465047 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:54.465112 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:54.489955 2149446 cri.go:96] found id: ""
	I1222 01:52:54.489979 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.489988 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:54.489994 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:54.490051 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:54.520365 2149446 cri.go:96] found id: ""
	I1222 01:52:54.520390 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.520400 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:54.520406 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:54.520463 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:54.546850 2149446 cri.go:96] found id: ""
	I1222 01:52:54.546875 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.546883 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:54.546890 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:54.546971 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:54.572315 2149446 cri.go:96] found id: ""
	I1222 01:52:54.572353 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.572363 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:54.572385 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:54.572466 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:54.606370 2149446 cri.go:96] found id: ""
	I1222 01:52:54.606405 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.606414 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:54.606421 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:54.606494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:54.640572 2149446 cri.go:96] found id: ""
	I1222 01:52:54.640598 2149446 logs.go:282] 0 containers: []
	W1222 01:52:54.640607 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:54.640616 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:54.640628 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:54.708604 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:54.700120    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.700711    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.702391    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.703169    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:54.704748    9035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:54.708624 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:54.708637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:54.737164 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:54.737199 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:52:54.768207 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:54.768291 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:54.833444 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:54.833481 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:57.350685 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:52:57.362041 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:52:57.362120 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:52:57.395509 2149446 cri.go:96] found id: ""
	I1222 01:52:57.395533 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.395542 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:52:57.395548 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:52:57.395606 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:52:57.422756 2149446 cri.go:96] found id: ""
	I1222 01:52:57.422778 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.422787 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:52:57.422794 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:52:57.422856 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:52:57.449198 2149446 cri.go:96] found id: ""
	I1222 01:52:57.449274 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.449297 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:52:57.449332 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:52:57.449410 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:52:57.475089 2149446 cri.go:96] found id: ""
	I1222 01:52:57.475116 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.475136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:52:57.475142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:52:57.475239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:52:57.500721 2149446 cri.go:96] found id: ""
	I1222 01:52:57.500747 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.500755 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:52:57.500763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:52:57.500823 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:52:57.526256 2149446 cri.go:96] found id: ""
	I1222 01:52:57.526283 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.526292 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:52:57.526299 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:52:57.526361 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:52:57.551020 2149446 cri.go:96] found id: ""
	I1222 01:52:57.551098 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.551122 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:52:57.551135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:52:57.551251 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:52:57.578237 2149446 cri.go:96] found id: ""
	I1222 01:52:57.578264 2149446 logs.go:282] 0 containers: []
	W1222 01:52:57.578272 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:52:57.578281 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:52:57.578293 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:52:57.647976 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:52:57.648013 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:52:57.669609 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:52:57.669637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:52:57.736734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:52:57.727886    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.728902    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730342    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.730664    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:52:57.732062    9153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:52:57.736758 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:52:57.736777 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:52:57.765104 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:52:57.765143 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:00.296700 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:00.314134 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:00.314362 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:00.360664 2149446 cri.go:96] found id: ""
	I1222 01:53:00.360694 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.360703 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:00.360709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:00.360787 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:00.400391 2149446 cri.go:96] found id: ""
	I1222 01:53:00.400434 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.400447 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:00.400458 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:00.400534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:00.437859 2149446 cri.go:96] found id: ""
	I1222 01:53:00.437883 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.437892 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:00.437899 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:00.437960 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:00.468446 2149446 cri.go:96] found id: ""
	I1222 01:53:00.468514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.468536 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:00.468555 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:00.468641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:00.494431 2149446 cri.go:96] found id: ""
	I1222 01:53:00.494462 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.494471 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:00.494486 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:00.494564 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:00.520633 2149446 cri.go:96] found id: ""
	I1222 01:53:00.520660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.520669 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:00.520675 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:00.520735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:00.552618 2149446 cri.go:96] found id: ""
	I1222 01:53:00.552646 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.552655 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:00.552661 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:00.552721 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:00.578469 2149446 cri.go:96] found id: ""
	I1222 01:53:00.578492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:00.578500 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:00.578509 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:00.578523 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:00.650905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:00.650939 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:00.669192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:00.669220 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:00.732631 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:00.724784    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.725580    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727143    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.727454    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:00.728888    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:00.732650 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:00.732662 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:00.761586 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:00.761617 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.289824 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:03.300567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:03.300641 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:03.326667 2149446 cri.go:96] found id: ""
	I1222 01:53:03.326690 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.326698 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:03.326705 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:03.326763 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:03.363614 2149446 cri.go:96] found id: ""
	I1222 01:53:03.363637 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.363652 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:03.363662 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:03.363743 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:03.391010 2149446 cri.go:96] found id: ""
	I1222 01:53:03.391033 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.391047 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:03.391054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:03.391126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:03.419391 2149446 cri.go:96] found id: ""
	I1222 01:53:03.419418 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.419426 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:03.419433 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:03.419494 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:03.445436 2149446 cri.go:96] found id: ""
	I1222 01:53:03.445461 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.445470 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:03.445476 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:03.445534 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:03.475787 2149446 cri.go:96] found id: ""
	I1222 01:53:03.475812 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.475821 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:03.475827 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:03.475894 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:03.502216 2149446 cri.go:96] found id: ""
	I1222 01:53:03.502247 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.502256 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:03.502262 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:03.502323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:03.532273 2149446 cri.go:96] found id: ""
	I1222 01:53:03.532294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:03.532303 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:03.532312 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:03.532323 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:03.548937 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:03.548964 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:03.633734 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:03.626155    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.626558    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628013    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.628303    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:03.629725    9375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:03.633808 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:03.633837 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:03.665918 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:03.665950 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:03.698908 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:03.698933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:06.272029 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:06.282999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:06.283072 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:06.309858 2149446 cri.go:96] found id: ""
	I1222 01:53:06.309882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.309890 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:06.309896 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:06.309954 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:06.334809 2149446 cri.go:96] found id: ""
	I1222 01:53:06.334837 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.334845 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:06.334852 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:06.334922 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:06.366780 2149446 cri.go:96] found id: ""
	I1222 01:53:06.366807 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.366816 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:06.366823 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:06.366884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:06.396941 2149446 cri.go:96] found id: ""
	I1222 01:53:06.397018 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.397040 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:06.397057 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:06.397162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:06.424268 2149446 cri.go:96] found id: ""
	I1222 01:53:06.424294 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.424302 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:06.424309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:06.424384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:06.451699 2149446 cri.go:96] found id: ""
	I1222 01:53:06.451723 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.451732 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:06.451739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:06.451799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:06.480588 2149446 cri.go:96] found id: ""
	I1222 01:53:06.480610 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.480618 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:06.480624 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:06.480685 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:06.508393 2149446 cri.go:96] found id: ""
	I1222 01:53:06.508419 2149446 logs.go:282] 0 containers: []
	W1222 01:53:06.508428 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:06.508437 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:06.508449 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:06.525070 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:06.525101 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:06.590361 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:06.581622    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.582460    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.583922    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.584374    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:06.586045    9490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:06.590383 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:06.590398 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:06.624081 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:06.624115 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:06.663556 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:06.663585 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
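Each cycle above is minikube's control-plane health probe: it looks for a running kube-apiserver process, lists CRI containers for every expected control-plane component, and, finding none, regathers the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of the same checks run by hand, assuming shell access to the node (for example via "minikube ssh"); the commands are taken verbatim from the log:

  # Mirrors the probes logged above; empty crictl output means the container was never created.
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet kubernetes-dashboard; do
    sudo crictl --timeout=10s ps -a --quiet --name="$c"
  done
  sudo journalctl -u kubelet -n 400      # kubelet service logs
  sudo journalctl -u crio -n 400         # CRI-O service logs
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400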
	I1222 01:53:09.229815 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:09.240655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:09.240737 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:09.265982 2149446 cri.go:96] found id: ""
	I1222 01:53:09.266004 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.266013 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:09.266019 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:09.266078 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:09.291816 2149446 cri.go:96] found id: ""
	I1222 01:53:09.291841 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.291849 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:09.291855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:09.291914 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:09.321089 2149446 cri.go:96] found id: ""
	I1222 01:53:09.321114 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.321123 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:09.321129 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:09.321187 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:09.351795 2149446 cri.go:96] found id: ""
	I1222 01:53:09.351824 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.351833 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:09.351839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:09.351897 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:09.386800 2149446 cri.go:96] found id: ""
	I1222 01:53:09.386828 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.386837 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:09.386844 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:09.386902 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:09.416549 2149446 cri.go:96] found id: ""
	I1222 01:53:09.416574 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.416582 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:09.416589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:09.416648 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:09.442626 2149446 cri.go:96] found id: ""
	I1222 01:53:09.442649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.442662 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:09.442671 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:09.442770 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:09.468113 2149446 cri.go:96] found id: ""
	I1222 01:53:09.468187 2149446 logs.go:282] 0 containers: []
	W1222 01:53:09.468209 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:09.468230 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:09.468270 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:09.531958 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:09.523337    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.523854    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.525572    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.526159    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:09.527780    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:09.531982 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:09.531994 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:09.559710 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:09.559743 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:09.601932 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:09.602030 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:09.676284 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:09.676317 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
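The "connection refused" on localhost:8443 follows directly from the empty container listings: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at the apiserver's port 8443 on the node itself, and with no kube-apiserver container running, nothing is listening there, so every "describe nodes" attempt fails before it can even reach authentication. A quick hypothetical probe of that port (assuming curl is available inside the node image):

  # Hypothetical check: with no apiserver up, this fails the same way kubectl does.
  curl -k https://localhost:8443/healthz || echo "apiserver not listening"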
	I1222 01:53:12.195759 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:12.206480 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:12.206553 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:12.235418 2149446 cri.go:96] found id: ""
	I1222 01:53:12.235491 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.235513 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:12.235527 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:12.235611 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:12.261441 2149446 cri.go:96] found id: ""
	I1222 01:53:12.261467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.261476 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:12.261482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:12.261551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:12.291019 2149446 cri.go:96] found id: ""
	I1222 01:53:12.291044 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.291053 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:12.291059 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:12.291117 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:12.317108 2149446 cri.go:96] found id: ""
	I1222 01:53:12.317132 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.317152 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:12.317160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:12.317222 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:12.345136 2149446 cri.go:96] found id: ""
	I1222 01:53:12.345164 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.345173 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:12.345180 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:12.345239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:12.372959 2149446 cri.go:96] found id: ""
	I1222 01:53:12.372991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.373000 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:12.373006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:12.373076 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:12.413013 2149446 cri.go:96] found id: ""
	I1222 01:53:12.413049 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.413058 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:12.413065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:12.413134 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:12.439197 2149446 cri.go:96] found id: ""
	I1222 01:53:12.439225 2149446 logs.go:282] 0 containers: []
	W1222 01:53:12.439235 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:12.439245 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:12.439256 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:12.469256 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:12.469285 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:12.534711 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:12.534748 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:12.551566 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:12.551637 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:12.624758 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:12.616140    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.617087    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.618606    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.619192    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:12.620695    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:12.624828 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:12.624858 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.157013 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:15.168255 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:15.168328 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:15.195462 2149446 cri.go:96] found id: ""
	I1222 01:53:15.195486 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.195494 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:15.195501 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:15.195560 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:15.227170 2149446 cri.go:96] found id: ""
	I1222 01:53:15.227198 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.227208 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:15.227215 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:15.227282 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:15.254951 2149446 cri.go:96] found id: ""
	I1222 01:53:15.254975 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.254983 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:15.254990 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:15.255073 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:15.280554 2149446 cri.go:96] found id: ""
	I1222 01:53:15.280577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.280586 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:15.280592 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:15.280677 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:15.306166 2149446 cri.go:96] found id: ""
	I1222 01:53:15.306189 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.306198 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:15.306203 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:15.306261 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:15.331390 2149446 cri.go:96] found id: ""
	I1222 01:53:15.331414 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.331422 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:15.331429 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:15.331487 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:15.372404 2149446 cri.go:96] found id: ""
	I1222 01:53:15.372433 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.372442 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:15.372449 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:15.372512 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:15.404854 2149446 cri.go:96] found id: ""
	I1222 01:53:15.404894 2149446 logs.go:282] 0 containers: []
	W1222 01:53:15.404902 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:15.404918 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:15.404930 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:15.469796 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:15.469833 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:15.487192 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:15.487222 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:15.550547 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:15.541963    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.542670    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.544415    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.545061    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:15.546601    9829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:15.550568 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:15.550582 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:15.581523 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:15.581556 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:18.130208 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:18.141246 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:18.141343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:18.168196 2149446 cri.go:96] found id: ""
	I1222 01:53:18.168219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.168227 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:18.168234 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:18.168294 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:18.194001 2149446 cri.go:96] found id: ""
	I1222 01:53:18.194025 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.194034 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:18.194040 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:18.194101 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:18.220324 2149446 cri.go:96] found id: ""
	I1222 01:53:18.220354 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.220362 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:18.220369 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:18.220427 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:18.249998 2149446 cri.go:96] found id: ""
	I1222 01:53:18.250027 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.250036 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:18.250042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:18.250102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:18.275763 2149446 cri.go:96] found id: ""
	I1222 01:53:18.275789 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.275798 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:18.275805 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:18.275863 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:18.304972 2149446 cri.go:96] found id: ""
	I1222 01:53:18.305001 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.305009 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:18.305015 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:18.305074 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:18.330117 2149446 cri.go:96] found id: ""
	I1222 01:53:18.330139 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.330147 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:18.330154 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:18.330214 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:18.368632 2149446 cri.go:96] found id: ""
	I1222 01:53:18.368663 2149446 logs.go:282] 0 containers: []
	W1222 01:53:18.368672 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:18.368680 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:18.368727 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:18.442905 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:18.442946 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:18.459582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:18.459658 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:18.528179 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:18.516076    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.516516    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.522440    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.523458    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:18.524496    9943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:18.528201 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:18.528213 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:18.555680 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:18.555710 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
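The container-status probe above uses a small shell fallback chain: command substitution resolves crictl from PATH (falling back to the bare name if "which" finds nothing), and if that invocation fails entirely it retries with docker. Unrolled for readability:

  # Same logic as the one-liner above, expanded.
  CRICTL="$(which crictl || echo crictl)"    # bare name if crictl is not on PATH
  sudo "$CRICTL" ps -a || sudo docker ps -a  # docker as last resort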
	I1222 01:53:21.084080 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:21.094603 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:21.094678 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:21.118899 2149446 cri.go:96] found id: ""
	I1222 01:53:21.118926 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.118936 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:21.118943 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:21.119001 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:21.147861 2149446 cri.go:96] found id: ""
	I1222 01:53:21.147890 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.147898 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:21.147905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:21.147968 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:21.172300 2149446 cri.go:96] found id: ""
	I1222 01:53:21.172328 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.172344 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:21.172350 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:21.172408 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:21.196518 2149446 cri.go:96] found id: ""
	I1222 01:53:21.196547 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.196555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:21.196561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:21.196621 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:21.222351 2149446 cri.go:96] found id: ""
	I1222 01:53:21.222373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.222381 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:21.222387 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:21.222445 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:21.247014 2149446 cri.go:96] found id: ""
	I1222 01:53:21.247056 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.247065 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:21.247072 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:21.247130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:21.272507 2149446 cri.go:96] found id: ""
	I1222 01:53:21.272530 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.272538 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:21.272545 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:21.272603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:21.298865 2149446 cri.go:96] found id: ""
	I1222 01:53:21.298930 2149446 logs.go:282] 0 containers: []
	W1222 01:53:21.298955 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:21.298969 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:21.298981 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:21.375967 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:21.367288   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.368097   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.369766   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.370303   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:21.371978   10047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:21.375990 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:21.376003 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:21.407155 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:21.407194 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:21.439948 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:21.439978 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:21.506644 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:21.506681 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
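The timestamps show the probe repeating roughly every three seconds (01:53:06, :09, :12, ...), consistent with an apiserver wait loop re-running until a kube-apiserver process appears or the overall start timeout expires. A hypothetical bash equivalent of that loop (the real loop lives in minikube's Go code, not a shell script; only the pgrep pattern is taken from the log):

  # Hypothetical sketch only: poll every 3s for an apiserver process, as the log does.
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 3
  done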
	I1222 01:53:24.027394 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:24.038461 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:24.038535 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:24.065524 2149446 cri.go:96] found id: ""
	I1222 01:53:24.065549 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.065558 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:24.065564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:24.065631 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:24.096630 2149446 cri.go:96] found id: ""
	I1222 01:53:24.096660 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.096670 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:24.096677 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:24.096744 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:24.122428 2149446 cri.go:96] found id: ""
	I1222 01:53:24.122453 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.122462 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:24.122468 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:24.122533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:24.147612 2149446 cri.go:96] found id: ""
	I1222 01:53:24.147634 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.147642 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:24.147649 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:24.147713 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:24.176841 2149446 cri.go:96] found id: ""
	I1222 01:53:24.176913 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.176933 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:24.176954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:24.177042 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:24.206047 2149446 cri.go:96] found id: ""
	I1222 01:53:24.206073 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.206082 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:24.206089 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:24.206148 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:24.232264 2149446 cri.go:96] found id: ""
	I1222 01:53:24.232292 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.232301 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:24.232307 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:24.232376 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:24.258596 2149446 cri.go:96] found id: ""
	I1222 01:53:24.258624 2149446 logs.go:282] 0 containers: []
	W1222 01:53:24.258633 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:24.258642 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:24.258655 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:24.286162 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:24.286196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:24.314664 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:24.314690 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:24.384317 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:24.384405 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:24.404914 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:24.404992 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:24.478461 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:24.470772   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.471164   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472646   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.472964   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:24.474221   10187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:26.978694 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:26.989065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:26.989132 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:27.017546 2149446 cri.go:96] found id: ""
	I1222 01:53:27.017577 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.017586 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:27.017593 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:27.017662 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:27.047826 2149446 cri.go:96] found id: ""
	I1222 01:53:27.047855 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.047863 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:27.047871 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:27.047931 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:27.073912 2149446 cri.go:96] found id: ""
	I1222 01:53:27.073940 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.073952 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:27.073958 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:27.074018 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:27.098951 2149446 cri.go:96] found id: ""
	I1222 01:53:27.098974 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.098982 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:27.098988 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:27.099047 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:27.124715 2149446 cri.go:96] found id: ""
	I1222 01:53:27.124739 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.124748 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:27.124754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:27.124817 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:27.150398 2149446 cri.go:96] found id: ""
	I1222 01:53:27.150420 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.150431 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:27.150437 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:27.150502 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:27.176315 2149446 cri.go:96] found id: ""
	I1222 01:53:27.176347 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.176356 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:27.176386 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:27.176493 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:27.203872 2149446 cri.go:96] found id: ""
	I1222 01:53:27.203953 2149446 logs.go:282] 0 containers: []
	W1222 01:53:27.203977 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:27.204000 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:27.204034 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:27.237039 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:27.237067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:27.301398 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:27.301432 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:27.317630 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:27.317657 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:27.393864 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:27.383878   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.385765   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.387565   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.388163   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:27.389778   10292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:27.393934 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:27.393960 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:29.927573 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:29.938417 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:29.938524 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:29.964036 2149446 cri.go:96] found id: ""
	I1222 01:53:29.964065 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.964073 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:29.964080 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:29.964138 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:29.988604 2149446 cri.go:96] found id: ""
	I1222 01:53:29.988632 2149446 logs.go:282] 0 containers: []
	W1222 01:53:29.988641 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:29.988647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:29.988708 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:30.026459 2149446 cri.go:96] found id: ""
	I1222 01:53:30.026487 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.026497 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:30.026504 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:30.026573 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:30.057912 2149446 cri.go:96] found id: ""
	I1222 01:53:30.057939 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.057949 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:30.057955 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:30.058030 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:30.083599 2149446 cri.go:96] found id: ""
	I1222 01:53:30.083668 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.083690 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:30.083709 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:30.083806 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:30.110551 2149446 cri.go:96] found id: ""
	I1222 01:53:30.110620 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.110641 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:30.110655 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:30.110735 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:30.137174 2149446 cri.go:96] found id: ""
	I1222 01:53:30.137203 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.137211 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:30.137218 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:30.137277 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:30.162716 2149446 cri.go:96] found id: ""
	I1222 01:53:30.162753 2149446 logs.go:282] 0 containers: []
	W1222 01:53:30.162763 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:30.162772 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:30.162791 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:30.227759 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:30.227794 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:30.245443 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:30.245482 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:30.307769 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:30.299415   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.299980   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301457   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.301852   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:30.303320   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:30.307792 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:30.307803 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:30.335176 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:30.335210 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:32.871091 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:32.881783 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:32.881907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:32.916036 2149446 cri.go:96] found id: ""
	I1222 01:53:32.916106 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.916128 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:32.916147 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:32.916230 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:32.943595 2149446 cri.go:96] found id: ""
	I1222 01:53:32.943619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.943627 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:32.943634 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:32.943692 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:32.968836 2149446 cri.go:96] found id: ""
	I1222 01:53:32.968860 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.968868 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:32.968874 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:32.968956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:32.997882 2149446 cri.go:96] found id: ""
	I1222 01:53:32.997950 2149446 logs.go:282] 0 containers: []
	W1222 01:53:32.997973 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:32.997985 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:32.998068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:33.026110 2149446 cri.go:96] found id: ""
	I1222 01:53:33.026137 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.026145 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:33.026152 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:33.026229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:33.055281 2149446 cri.go:96] found id: ""
	I1222 01:53:33.055306 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.055315 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:33.055321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:33.055398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:33.086387 2149446 cri.go:96] found id: ""
	I1222 01:53:33.086465 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.086486 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:33.086499 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:33.086576 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:33.112687 2149446 cri.go:96] found id: ""
	I1222 01:53:33.112730 2149446 logs.go:282] 0 containers: []
	W1222 01:53:33.112740 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:33.112749 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:33.112760 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:33.181024 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:33.181067 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:33.198158 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:33.198190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:33.273247 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:33.265089   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.265940   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.267575   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.268051   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:33.269666   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:33.273341 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:33.273378 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:33.301159 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:33.301190 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:35.836720 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:35.847980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:35.848054 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:35.879849 2149446 cri.go:96] found id: ""
	I1222 01:53:35.879879 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.879888 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:35.879894 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:35.879956 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:35.915400 2149446 cri.go:96] found id: ""
	I1222 01:53:35.915426 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.915434 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:35.915441 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:35.915511 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:35.940054 2149446 cri.go:96] found id: ""
	I1222 01:53:35.940078 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.940086 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:35.940093 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:35.940153 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:35.970602 2149446 cri.go:96] found id: ""
	I1222 01:53:35.970625 2149446 logs.go:282] 0 containers: []
	W1222 01:53:35.970634 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:35.970641 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:35.970699 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:36.000035 2149446 cri.go:96] found id: ""
	I1222 01:53:36.000060 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.000069 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:36.000076 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:36.000146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:36.030030 2149446 cri.go:96] found id: ""
	I1222 01:53:36.030062 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.030072 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:36.030085 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:36.030160 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:36.056509 2149446 cri.go:96] found id: ""
	I1222 01:53:36.056535 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.056547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:36.056554 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:36.056615 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:36.083227 2149446 cri.go:96] found id: ""
	I1222 01:53:36.083254 2149446 logs.go:282] 0 containers: []
	W1222 01:53:36.083262 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:36.083271 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:36.083282 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:36.112082 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:36.112120 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:36.144537 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:36.144568 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:36.209908 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:36.209989 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:36.228268 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:36.228349 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:36.290580 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:36.282527   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.283074   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284509   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.284836   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:36.286252   10631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:38.791376 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:38.803195 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:38.803268 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:38.827957 2149446 cri.go:96] found id: ""
	I1222 01:53:38.827983 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.827991 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:38.827997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:38.828059 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:38.858193 2149446 cri.go:96] found id: ""
	I1222 01:53:38.858219 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.858232 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:38.858238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:38.858300 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:38.892136 2149446 cri.go:96] found id: ""
	I1222 01:53:38.892168 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.892177 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:38.892184 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:38.892262 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:38.921915 2149446 cri.go:96] found id: ""
	I1222 01:53:38.921984 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.921999 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:38.922008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:38.922067 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:38.947824 2149446 cri.go:96] found id: ""
	I1222 01:53:38.947847 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.947855 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:38.947862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:38.947921 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:38.972933 2149446 cri.go:96] found id: ""
	I1222 01:53:38.972959 2149446 logs.go:282] 0 containers: []
	W1222 01:53:38.972967 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:38.972974 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:38.973039 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:39.007055 2149446 cri.go:96] found id: ""
	I1222 01:53:39.007083 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.007092 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:39.007099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:39.007162 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:39.033180 2149446 cri.go:96] found id: ""
	I1222 01:53:39.033210 2149446 logs.go:282] 0 containers: []
	W1222 01:53:39.033219 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:39.033228 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:39.033240 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:39.098988 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:39.099022 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:39.117169 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:39.117195 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:39.179213 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:39.171486   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.172049   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.173620   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.174041   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:39.175480   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:39.179238 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:39.179250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:39.207330 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:39.207363 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:41.736154 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:41.748704 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:41.748779 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:41.774614 2149446 cri.go:96] found id: ""
	I1222 01:53:41.774640 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.774658 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:41.774668 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:41.774726 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:41.804623 2149446 cri.go:96] found id: ""
	I1222 01:53:41.804649 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.804658 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:41.804665 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:41.804723 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:41.833829 2149446 cri.go:96] found id: ""
	I1222 01:53:41.833856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.833865 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:41.833872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:41.833936 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:41.875465 2149446 cri.go:96] found id: ""
	I1222 01:53:41.875492 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.875500 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:41.875507 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:41.875571 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:41.905442 2149446 cri.go:96] found id: ""
	I1222 01:53:41.905467 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.905476 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:41.905482 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:41.905570 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:41.932195 2149446 cri.go:96] found id: ""
	I1222 01:53:41.932222 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.932231 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:41.932238 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:41.932323 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:41.958757 2149446 cri.go:96] found id: ""
	I1222 01:53:41.958781 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.958795 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:41.958801 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:41.958884 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:41.984634 2149446 cri.go:96] found id: ""
	I1222 01:53:41.984658 2149446 logs.go:282] 0 containers: []
	W1222 01:53:41.984666 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:41.984683 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:41.984711 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:42.012813 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:42.012851 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:42.047278 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:42.047308 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:42.115243 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:42.115288 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:42.152682 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:42.152891 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:42.276937 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:42.267520   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.268654   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.269846   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.270701   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:42.271864   10854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:44.778660 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:44.788968 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:44.789038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:44.814474 2149446 cri.go:96] found id: ""
	I1222 01:53:44.814498 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.814507 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:44.814514 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:44.814572 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:44.838957 2149446 cri.go:96] found id: ""
	I1222 01:53:44.838981 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.838990 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:44.838997 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:44.839055 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:44.871015 2149446 cri.go:96] found id: ""
	I1222 01:53:44.871040 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.871048 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:44.871055 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:44.871126 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:44.901830 2149446 cri.go:96] found id: ""
	I1222 01:53:44.901852 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.901860 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:44.901866 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:44.901926 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:44.932837 2149446 cri.go:96] found id: ""
	I1222 01:53:44.932912 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.932936 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:44.932954 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:44.933034 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:44.958646 2149446 cri.go:96] found id: ""
	I1222 01:53:44.958670 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.958679 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:44.958685 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:44.958745 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:44.990447 2149446 cri.go:96] found id: ""
	I1222 01:53:44.990470 2149446 logs.go:282] 0 containers: []
	W1222 01:53:44.990479 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:44.990485 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:44.990551 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:45.030189 2149446 cri.go:96] found id: ""
	I1222 01:53:45.030216 2149446 logs.go:282] 0 containers: []
	W1222 01:53:45.030226 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:45.030236 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:45.030250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:45.105196 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:45.105243 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:45.127602 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:45.127641 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:45.224662 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:45.212653   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.213852   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.215129   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.216095   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:45.218056   10954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:45.224690 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:45.224706 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:45.294896 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:45.294933 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:47.831558 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:47.843230 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:47.843313 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:47.888291 2149446 cri.go:96] found id: ""
	I1222 01:53:47.888314 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.888323 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:47.888329 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:47.888398 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:47.921941 2149446 cri.go:96] found id: ""
	I1222 01:53:47.921965 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.921973 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:47.921979 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:47.922038 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:47.948316 2149446 cri.go:96] found id: ""
	I1222 01:53:47.948349 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.948358 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:47.948373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:47.948449 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:47.975804 2149446 cri.go:96] found id: ""
	I1222 01:53:47.975882 2149446 logs.go:282] 0 containers: []
	W1222 01:53:47.975898 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:47.975905 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:47.975969 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:48.017770 2149446 cri.go:96] found id: ""
	I1222 01:53:48.017798 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.017807 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:48.017814 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:48.017876 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:48.045819 2149446 cri.go:96] found id: ""
	I1222 01:53:48.045844 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.045853 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:48.045862 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:48.045924 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:48.072420 2149446 cri.go:96] found id: ""
	I1222 01:53:48.072446 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.072454 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:48.072467 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:48.072530 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:48.098359 2149446 cri.go:96] found id: ""
	I1222 01:53:48.098384 2149446 logs.go:282] 0 containers: []
	W1222 01:53:48.098392 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:48.098401 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:48.098443 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:48.115484 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:48.115515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:48.182899 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:48.174594   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.175246   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.176834   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.177351   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:48.178997   11066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:48.182970 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:48.182991 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:48.211848 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:48.211883 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:48.240534 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:48.240647 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:50.807734 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:50.819902 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:50.819972 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:50.856953 2149446 cri.go:96] found id: ""
	I1222 01:53:50.856980 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.856989 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:50.856995 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:50.857057 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:50.893636 2149446 cri.go:96] found id: ""
	I1222 01:53:50.893665 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.893675 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:50.893681 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:50.893742 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:50.923962 2149446 cri.go:96] found id: ""
	I1222 01:53:50.923991 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.924000 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:50.924006 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:50.924066 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:50.949904 2149446 cri.go:96] found id: ""
	I1222 01:53:50.949931 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.949940 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:50.949946 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:50.950007 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:50.979592 2149446 cri.go:96] found id: ""
	I1222 01:53:50.979619 2149446 logs.go:282] 0 containers: []
	W1222 01:53:50.979627 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:50.979633 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:50.979696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:51.006796 2149446 cri.go:96] found id: ""
	I1222 01:53:51.006823 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.006831 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:51.006838 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:51.006907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:51.036723 2149446 cri.go:96] found id: ""
	I1222 01:53:51.036750 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.036758 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:51.036765 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:51.036826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:51.066606 2149446 cri.go:96] found id: ""
	I1222 01:53:51.066630 2149446 logs.go:282] 0 containers: []
	W1222 01:53:51.066639 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:51.066648 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:51.066661 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:51.132268 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:51.132306 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:51.150033 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:51.150064 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:51.221006 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:51.211594   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.212356   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.214275   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215136   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:51.215987   11182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:51.221029 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:51.221042 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:51.249405 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:51.249437 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:53.781794 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:53.792491 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:53.792562 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:53.821565 2149446 cri.go:96] found id: ""
	I1222 01:53:53.821592 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.821601 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:53.821607 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:53.821664 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:53.858004 2149446 cri.go:96] found id: ""
	I1222 01:53:53.858026 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.858035 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:53.858042 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:53.858102 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:53.884816 2149446 cri.go:96] found id: ""
	I1222 01:53:53.884840 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.884848 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:53.884855 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:53.884917 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:53.910690 2149446 cri.go:96] found id: ""
	I1222 01:53:53.910763 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.910785 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:53.910806 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:53.910896 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:53.938355 2149446 cri.go:96] found id: ""
	I1222 01:53:53.938382 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.938392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:53.938398 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:53.938484 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:53.968188 2149446 cri.go:96] found id: ""
	I1222 01:53:53.968220 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.968229 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:53.968236 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:53.968344 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:53.997318 2149446 cri.go:96] found id: ""
	I1222 01:53:53.997373 2149446 logs.go:282] 0 containers: []
	W1222 01:53:53.997382 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:53.997388 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:53.997462 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:54.025412 2149446 cri.go:96] found id: ""
	I1222 01:53:54.025514 2149446 logs.go:282] 0 containers: []
	W1222 01:53:54.025538 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:54.025561 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:54.025608 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:54.093935 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:54.085512   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.086255   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.087980   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.088420   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:54.090003   11288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:54.093957 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:54.093969 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:54.123758 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:54.123795 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:54.153879 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:54.153908 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:54.220347 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:54.220396 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:56.737648 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:56.748600 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:56.748674 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:56.774130 2149446 cri.go:96] found id: ""
	I1222 01:53:56.774156 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.774164 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:56.774171 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:56.774232 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:56.800427 2149446 cri.go:96] found id: ""
	I1222 01:53:56.800458 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.800466 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:56.800472 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:56.800533 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:56.825296 2149446 cri.go:96] found id: ""
	I1222 01:53:56.825341 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.825352 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:56.825359 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:56.825417 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:56.859965 2149446 cri.go:96] found id: ""
	I1222 01:53:56.859992 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.860000 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:56.860007 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:56.860070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:56.893738 2149446 cri.go:96] found id: ""
	I1222 01:53:56.893764 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.893773 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:56.893779 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:56.893840 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:56.922780 2149446 cri.go:96] found id: ""
	I1222 01:53:56.922805 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.922814 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:56.922820 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:56.922882 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:56.951835 2149446 cri.go:96] found id: ""
	I1222 01:53:56.951865 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.951874 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:56.951881 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:56.951942 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:56.982753 2149446 cri.go:96] found id: ""
	I1222 01:53:56.982788 2149446 logs.go:282] 0 containers: []
	W1222 01:53:56.982802 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:56.982832 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:56.982859 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:53:57.054734 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:53:57.054771 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:53:57.071366 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:53:57.071447 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:53:57.139693 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:53:57.131885   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.132329   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134013   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.134495   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:53:57.135751   11407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:53:57.139715 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:57.139728 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:57.168869 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:57.168907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.701462 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:53:59.712252 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:53:59.712324 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:53:59.737831 2149446 cri.go:96] found id: ""
	I1222 01:53:59.737856 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.737865 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:53:59.737872 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:53:59.737935 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:53:59.765066 2149446 cri.go:96] found id: ""
	I1222 01:53:59.765090 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.765099 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:53:59.765105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:53:59.765165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:53:59.789855 2149446 cri.go:96] found id: ""
	I1222 01:53:59.789878 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.789887 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:53:59.789893 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:53:59.789965 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:53:59.815579 2149446 cri.go:96] found id: ""
	I1222 01:53:59.815601 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.815609 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:53:59.815616 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:53:59.815676 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:53:59.840029 2149446 cri.go:96] found id: ""
	I1222 01:53:59.840051 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.840059 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:53:59.840066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:53:59.840127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:53:59.870757 2149446 cri.go:96] found id: ""
	I1222 01:53:59.870790 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.870804 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:53:59.870815 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:53:59.870883 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:53:59.899916 2149446 cri.go:96] found id: ""
	I1222 01:53:59.899941 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.899954 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:53:59.899962 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:53:59.900029 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:53:59.927188 2149446 cri.go:96] found id: ""
	I1222 01:53:59.927221 2149446 logs.go:282] 0 containers: []
	W1222 01:53:59.927230 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:53:59.927239 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:53:59.927250 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:53:59.957210 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:53:59.957247 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:53:59.988485 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:53:59.988515 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:00.082944 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:00.082993 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:00.133491 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:00.133527 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:00.297208 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:00.279334   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.280543   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.281714   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.282703   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:00.290162   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:02.797491 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:02.810137 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:02.810229 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:02.840273 2149446 cri.go:96] found id: ""
	I1222 01:54:02.840297 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.840305 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:02.840312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:02.840420 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:02.875715 2149446 cri.go:96] found id: ""
	I1222 01:54:02.875739 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.875747 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:02.875754 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:02.875813 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:02.919822 2149446 cri.go:96] found id: ""
	I1222 01:54:02.919846 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.919854 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:02.919860 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:02.919919 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:02.948124 2149446 cri.go:96] found id: ""
	I1222 01:54:02.948146 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.948154 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:02.948160 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:02.948219 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:02.974340 2149446 cri.go:96] found id: ""
	I1222 01:54:02.974363 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.974372 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:02.974378 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:02.974438 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:02.999271 2149446 cri.go:96] found id: ""
	I1222 01:54:02.999296 2149446 logs.go:282] 0 containers: []
	W1222 01:54:02.999305 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:02.999312 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:02.999385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:03.030970 2149446 cri.go:96] found id: ""
	I1222 01:54:03.030993 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.031002 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:03.031008 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:03.031068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:03.056002 2149446 cri.go:96] found id: ""
	I1222 01:54:03.056029 2149446 logs.go:282] 0 containers: []
	W1222 01:54:03.056038 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:03.056047 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:03.056058 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:03.125375 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:03.125411 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:03.145686 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:03.145716 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:03.218509 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:03.209113   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.210343   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.211031   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.212871   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:03.213440   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:03.218530 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:03.218542 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:03.248642 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:03.248677 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:05.780702 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:05.791068 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:05.791146 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:05.816364 2149446 cri.go:96] found id: ""
	I1222 01:54:05.816438 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.816460 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:05.816477 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:05.816563 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:05.841188 2149446 cri.go:96] found id: ""
	I1222 01:54:05.841211 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.841220 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:05.841226 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:05.841288 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:05.881850 2149446 cri.go:96] found id: ""
	I1222 01:54:05.881872 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.881881 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:05.881887 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:05.881948 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:05.916551 2149446 cri.go:96] found id: ""
	I1222 01:54:05.916573 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.916582 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:05.916589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:05.916651 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:05.943199 2149446 cri.go:96] found id: ""
	I1222 01:54:05.943226 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.943234 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:05.943240 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:05.943305 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:05.970571 2149446 cri.go:96] found id: ""
	I1222 01:54:05.970596 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.970605 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:05.970611 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:05.970671 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:05.995460 2149446 cri.go:96] found id: ""
	I1222 01:54:05.995526 2149446 logs.go:282] 0 containers: []
	W1222 01:54:05.995547 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:05.995567 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:05.995644 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:06.023405 2149446 cri.go:96] found id: ""
	I1222 01:54:06.023433 2149446 logs.go:282] 0 containers: []
	W1222 01:54:06.023442 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:06.023451 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:06.023462 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:06.052389 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:06.052429 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:06.081830 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:06.081857 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:06.148270 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:06.148310 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:06.164614 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:06.164642 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:06.229843 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:06.221234   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.222106   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.223684   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.224107   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:06.225591   11765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:08.730857 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:08.741318 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:08.741413 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:08.767143 2149446 cri.go:96] found id: ""
	I1222 01:54:08.767169 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.767182 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:08.767190 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:08.767248 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:08.791625 2149446 cri.go:96] found id: ""
	I1222 01:54:08.791658 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.791667 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:08.791673 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:08.791739 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:08.816602 2149446 cri.go:96] found id: ""
	I1222 01:54:08.816636 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.816645 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:08.816652 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:08.816718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:08.844185 2149446 cri.go:96] found id: ""
	I1222 01:54:08.844261 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.844283 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:08.844302 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:08.844404 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:08.883312 2149446 cri.go:96] found id: ""
	I1222 01:54:08.883384 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.883408 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:08.883428 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:08.883517 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:08.911421 2149446 cri.go:96] found id: ""
	I1222 01:54:08.911457 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.911466 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:08.911473 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:08.911538 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:08.942750 2149446 cri.go:96] found id: ""
	I1222 01:54:08.942775 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.942784 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:08.942791 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:08.942848 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:08.967789 2149446 cri.go:96] found id: ""
	I1222 01:54:08.967813 2149446 logs.go:282] 0 containers: []
	W1222 01:54:08.967822 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:08.967831 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:08.967842 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:09.033639 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:09.033675 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:09.052146 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:09.052175 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:09.116330 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:09.108758   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.109130   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110675   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.110988   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:09.112489   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:09.116356 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:09.116368 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:09.144973 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:09.145004 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:11.673475 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:11.684547 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:11.684626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:11.710519 2149446 cri.go:96] found id: ""
	I1222 01:54:11.710548 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.710557 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:11.710564 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:11.710626 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:11.735548 2149446 cri.go:96] found id: ""
	I1222 01:54:11.735572 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.735582 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:11.735589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:11.735654 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:11.760929 2149446 cri.go:96] found id: ""
	I1222 01:54:11.760954 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.760963 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:11.760969 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:11.761033 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:11.787093 2149446 cri.go:96] found id: ""
	I1222 01:54:11.787120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.787128 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:11.787135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:11.787193 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:11.812911 2149446 cri.go:96] found id: ""
	I1222 01:54:11.812936 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.812945 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:11.812951 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:11.813015 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:11.838482 2149446 cri.go:96] found id: ""
	I1222 01:54:11.838505 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.838514 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:11.838520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:11.838583 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:11.866259 2149446 cri.go:96] found id: ""
	I1222 01:54:11.866286 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.866306 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:11.866313 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:11.866384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:11.893985 2149446 cri.go:96] found id: ""
	I1222 01:54:11.894008 2149446 logs.go:282] 0 containers: []
	W1222 01:54:11.894016 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:11.894025 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:11.894037 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:11.913582 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:11.913611 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:11.981873 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:11.974736   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.975183   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.976706   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.977106   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:11.978227   11974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:54:11.981895 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:11.981907 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:12.010106 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:12.010148 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:12.040439 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:12.040469 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.613993 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:14.626494 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:14.626566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:14.659356 2149446 cri.go:96] found id: ""
	I1222 01:54:14.659378 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.659386 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:14.659392 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:14.659453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:14.686289 2149446 cri.go:96] found id: ""
	I1222 01:54:14.686311 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.686319 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:14.686326 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:14.686382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:14.711394 2149446 cri.go:96] found id: ""
	I1222 01:54:14.711422 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.711432 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:14.711438 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:14.711497 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:14.739481 2149446 cri.go:96] found id: ""
	I1222 01:54:14.739551 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.739573 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:14.739587 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:14.739660 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:14.768068 2149446 cri.go:96] found id: ""
	I1222 01:54:14.768090 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.768098 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:14.768105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:14.768164 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:14.794022 2149446 cri.go:96] found id: ""
	I1222 01:54:14.794045 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.794054 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:14.794061 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:14.794122 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:14.826016 2149446 cri.go:96] found id: ""
	I1222 01:54:14.826083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.826098 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:14.826105 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:14.826165 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:14.862200 2149446 cri.go:96] found id: ""
	I1222 01:54:14.862227 2149446 logs.go:282] 0 containers: []
	W1222 01:54:14.862238 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:14.862247 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:14.862266 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:14.936812 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:14.936849 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:14.955319 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:14.955350 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:15.031542 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:15.022418   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.023307   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025087   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.025579   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:15.027225   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:15.031567 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:15.031581 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:15.060447 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:15.060486 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
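
The cycle above is minikube's log-gathering loop while the control plane is down: it polls for a kube-apiserver process, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the enumeration step, assuming a shell on the node (for example via `minikube ssh`); the crictl invocation is copied verbatim from the log:

    # Query crictl for each control-plane container name, as the loop above does;
    # an empty result corresponds to the 'No container was found matching' warnings.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done
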
	I1222 01:54:17.589430 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:17.599883 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:17.600008 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:17.629312 2149446 cri.go:96] found id: ""
	I1222 01:54:17.629354 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.629363 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:17.629370 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:17.629430 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:17.666013 2149446 cri.go:96] found id: ""
	I1222 01:54:17.666039 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.666048 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:17.666054 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:17.666114 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:17.690505 2149446 cri.go:96] found id: ""
	I1222 01:54:17.690528 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.690536 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:17.690543 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:17.690603 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:17.716599 2149446 cri.go:96] found id: ""
	I1222 01:54:17.716623 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.716631 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:17.716647 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:17.716706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:17.741489 2149446 cri.go:96] found id: ""
	I1222 01:54:17.741513 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.741522 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:17.741528 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:17.741587 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:17.770157 2149446 cri.go:96] found id: ""
	I1222 01:54:17.770183 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.770192 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:17.770198 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:17.770286 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:17.794804 2149446 cri.go:96] found id: ""
	I1222 01:54:17.794828 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.794843 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:17.794850 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:17.794912 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:17.820386 2149446 cri.go:96] found id: ""
	I1222 01:54:17.820411 2149446 logs.go:282] 0 containers: []
	W1222 01:54:17.820421 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:17.820429 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:17.820441 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:17.886892 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:17.886927 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:17.906329 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:17.906358 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:17.968971 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:17.961422   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.961912   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963366   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.963698   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:17.965104   12199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:17.968994 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:17.969007 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:17.997165 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:17.997196 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
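
Every "describe nodes" attempt in this section fails identically: kubectl on the node cannot reach the API server at localhost:8443 (connection refused), which is consistent with the empty kube-apiserver container list gathered just before it. Two standard probes, not taken from the log, shown only as one plausible way to confirm that nothing is listening on the apiserver port:

    # Hypothetical follow-up checks (not run by minikube here):
    sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
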
	I1222 01:54:20.530538 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:20.541178 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:20.541247 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:20.567334 2149446 cri.go:96] found id: ""
	I1222 01:54:20.567358 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.567367 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:20.567373 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:20.567433 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:20.594495 2149446 cri.go:96] found id: ""
	I1222 01:54:20.594519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.594528 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:20.594534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:20.594592 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:20.627810 2149446 cri.go:96] found id: ""
	I1222 01:54:20.627837 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.627846 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:20.627853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:20.627913 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:20.659100 2149446 cri.go:96] found id: ""
	I1222 01:54:20.659127 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.659136 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:20.659142 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:20.659203 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:20.684059 2149446 cri.go:96] found id: ""
	I1222 01:54:20.684083 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.684092 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:20.684099 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:20.684158 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:20.708264 2149446 cri.go:96] found id: ""
	I1222 01:54:20.708289 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.708298 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:20.708305 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:20.708373 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:20.732615 2149446 cri.go:96] found id: ""
	I1222 01:54:20.732640 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.732649 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:20.732656 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:20.732718 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:20.764171 2149446 cri.go:96] found id: ""
	I1222 01:54:20.764198 2149446 logs.go:282] 0 containers: []
	W1222 01:54:20.764207 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:20.764215 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:20.764228 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:20.780713 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:20.780742 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:20.851938 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:20.843189   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.843837   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.845556   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.846092   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:20.847691   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:20.851961 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:20.851974 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:20.893661 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:20.893699 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:20.931494 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:20.931521 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
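
Note that the order of the "Gathering logs for ..." steps varies between cycles (in this one the kubelet logs were collected last), but the underlying commands are fixed. The two journalctl invocations, copied from the log; `--no-pager` is added here for interactive use and is not part of the original commands:

    # Last 400 lines of the kubelet and CRI-O unit logs, as gathered above.
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
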
	I1222 01:54:23.498941 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:23.509907 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:23.509983 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:23.539967 2149446 cri.go:96] found id: ""
	I1222 01:54:23.539997 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.540005 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:23.540012 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:23.540071 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:23.569856 2149446 cri.go:96] found id: ""
	I1222 01:54:23.569884 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.569894 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:23.569900 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:23.569961 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:23.604014 2149446 cri.go:96] found id: ""
	I1222 01:54:23.604082 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.604103 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:23.604124 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:23.604217 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:23.632481 2149446 cri.go:96] found id: ""
	I1222 01:54:23.632510 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.632519 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:23.632525 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:23.632586 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:23.663814 2149446 cri.go:96] found id: ""
	I1222 01:54:23.663838 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.663847 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:23.663853 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:23.663916 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:23.693790 2149446 cri.go:96] found id: ""
	I1222 01:54:23.693815 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.693824 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:23.693831 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:23.693891 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:23.722928 2149446 cri.go:96] found id: ""
	I1222 01:54:23.722995 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.723009 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:23.723017 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:23.723077 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:23.749108 2149446 cri.go:96] found id: ""
	I1222 01:54:23.749133 2149446 logs.go:282] 0 containers: []
	W1222 01:54:23.749142 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:23.749151 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:23.749162 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:23.814472 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:23.814506 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:23.831927 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:23.831957 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:23.920909 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:23.912937   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.913679   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915294   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915601   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.917055   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:23.912937   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.913679   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915294   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.915601   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:23.917055   12424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:23.920931 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:23.920943 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:23.949424 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:23.949460 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
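
The dmesg step repeated in each cycle filters the kernel ring buffer down to recent problems. Its flags, for reference: -P disables the pager, -H enables human-readable timestamps, -L=never disables color, and --level keeps only messages at warning severity or worse; the trailing tail keeps the last 400 lines. The command, verbatim from the log:

    # Kernel messages at level warn or worse, last 400 lines:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
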
	I1222 01:54:26.476761 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:26.487732 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:26.487807 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:26.517071 2149446 cri.go:96] found id: ""
	I1222 01:54:26.517147 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.517162 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:26.517170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:26.517245 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:26.542078 2149446 cri.go:96] found id: ""
	I1222 01:54:26.542101 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.542110 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:26.542116 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:26.542177 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:26.567743 2149446 cri.go:96] found id: ""
	I1222 01:54:26.567767 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.567775 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:26.567782 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:26.567843 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:26.595280 2149446 cri.go:96] found id: ""
	I1222 01:54:26.595307 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.595316 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:26.595322 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:26.595382 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:26.624206 2149446 cri.go:96] found id: ""
	I1222 01:54:26.624232 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.624241 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:26.624248 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:26.624308 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:26.658482 2149446 cri.go:96] found id: ""
	I1222 01:54:26.658508 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.658516 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:26.658523 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:26.658581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:26.686129 2149446 cri.go:96] found id: ""
	I1222 01:54:26.686152 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.686161 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:26.686168 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:26.686249 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:26.711709 2149446 cri.go:96] found id: ""
	I1222 01:54:26.711736 2149446 logs.go:282] 0 containers: []
	W1222 01:54:26.711745 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:26.711754 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:26.711764 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:26.778620 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:26.778656 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:26.795417 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:26.795446 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:26.867101 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:26.857306   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.858065   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.859736   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.860499   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.862034   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:26.857306   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.858065   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.859736   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.860499   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:26.862034   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:26.867125 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:26.867141 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:26.902503 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:26.902544 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
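
The container-status step uses a fallback: it resolves crictl with `which` when possible and, if the crictl listing fails entirely, falls back to docker. An equivalent form with $() instead of backticks, behavior unchanged:

    # Prefer crictl (resolved via which when available), fall back to docker:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
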
	I1222 01:54:29.434518 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:29.445170 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:29.445239 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:29.470088 2149446 cri.go:96] found id: ""
	I1222 01:54:29.470111 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.470120 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:29.470126 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:29.470190 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:29.494531 2149446 cri.go:96] found id: ""
	I1222 01:54:29.494553 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.494562 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:29.494568 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:29.494627 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:29.519108 2149446 cri.go:96] found id: ""
	I1222 01:54:29.519131 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.519139 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:29.519146 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:29.519212 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:29.544138 2149446 cri.go:96] found id: ""
	I1222 01:54:29.544162 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.544170 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:29.544176 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:29.544237 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:29.569957 2149446 cri.go:96] found id: ""
	I1222 01:54:29.569982 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.569991 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:29.569999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:29.570068 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:29.601725 2149446 cri.go:96] found id: ""
	I1222 01:54:29.601748 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.601756 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:29.601763 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:29.601826 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:29.631880 2149446 cri.go:96] found id: ""
	I1222 01:54:29.631904 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.631912 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:29.631918 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:29.631981 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:29.664642 2149446 cri.go:96] found id: ""
	I1222 01:54:29.664669 2149446 logs.go:282] 0 containers: []
	W1222 01:54:29.664678 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:29.664686 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:29.664698 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:29.732517 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:29.732561 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:29.751836 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:29.751868 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:29.825936 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:29.817178   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.817918   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.819621   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.820393   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.822050   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:29.817178   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.817918   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.819621   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.820393   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:29.822050   12646 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:29.825958 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:29.825971 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:29.860213 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:29.860248 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:32.399119 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:32.410321 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:32.410395 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:32.436535 2149446 cri.go:96] found id: ""
	I1222 01:54:32.436567 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.436576 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:32.436589 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:32.436667 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:32.462497 2149446 cri.go:96] found id: ""
	I1222 01:54:32.462522 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.462531 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:32.462538 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:32.462598 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:32.488532 2149446 cri.go:96] found id: ""
	I1222 01:54:32.488607 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.488631 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:32.488651 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:32.488727 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:32.513915 2149446 cri.go:96] found id: ""
	I1222 01:54:32.513939 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.513956 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:32.513980 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:32.514061 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:32.541025 2149446 cri.go:96] found id: ""
	I1222 01:54:32.541050 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.541060 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:32.541067 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:32.541130 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:32.568517 2149446 cri.go:96] found id: ""
	I1222 01:54:32.568544 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.568553 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:32.568561 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:32.568633 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:32.596417 2149446 cri.go:96] found id: ""
	I1222 01:54:32.596444 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.596453 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:32.596459 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:32.596522 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:32.629445 2149446 cri.go:96] found id: ""
	I1222 01:54:32.629473 2149446 logs.go:282] 0 containers: []
	W1222 01:54:32.629482 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:32.629491 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:32.629504 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:32.700379 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:32.700415 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:32.717228 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:32.717259 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:32.783507 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:32.774480   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.775245   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777042   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.779225   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:32.774480   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.775245   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777042   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.777779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:32.779225   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:32.783529 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:32.783541 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:32.812359 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:32.812415 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
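
Judging by the timestamps, a full cycle takes roughly three seconds, and each one opens with the same liveness probe for the apiserver process. The probe, copied from the log (pattern quoted here for shell safety): pgrep -f matches against the full command line, -x requires the pattern to match it exactly, and -n selects the newest matching process; empty output, as in every cycle here, means no kube-apiserver process exists yet:

    # Returns the PID of the newest kube-apiserver process, or nothing:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
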
	I1222 01:54:35.344384 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:35.357066 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:35.357137 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:35.387791 2149446 cri.go:96] found id: ""
	I1222 01:54:35.387821 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.387832 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:35.387839 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:35.387907 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:35.413270 2149446 cri.go:96] found id: ""
	I1222 01:54:35.413294 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.413303 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:35.413309 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:35.413385 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:35.438755 2149446 cri.go:96] found id: ""
	I1222 01:54:35.438783 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.438791 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:35.438798 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:35.438859 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:35.464735 2149446 cri.go:96] found id: ""
	I1222 01:54:35.464761 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.464771 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:35.464778 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:35.464838 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:35.490615 2149446 cri.go:96] found id: ""
	I1222 01:54:35.490639 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.490647 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:35.490654 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:35.490722 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:35.520602 2149446 cri.go:96] found id: ""
	I1222 01:54:35.520628 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.520637 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:35.520644 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:35.520706 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:35.545887 2149446 cri.go:96] found id: ""
	I1222 01:54:35.545911 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.545919 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:35.545928 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:35.545989 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:35.571922 2149446 cri.go:96] found id: ""
	I1222 01:54:35.571945 2149446 logs.go:282] 0 containers: []
	W1222 01:54:35.571953 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:35.571962 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:35.571973 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:35.641377 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:35.641458 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:35.658922 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:35.659080 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:35.721523 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:35.714066   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.714428   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.715873   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.716191   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.717743   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:35.714066   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.714428   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.715873   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.716191   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:35.717743   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:35.721545 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:35.721560 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:35.749350 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:35.749380 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:38.281427 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:38.293044 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:38.293127 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:38.321491 2149446 cri.go:96] found id: ""
	I1222 01:54:38.321519 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.321528 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:38.321534 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:38.321593 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:38.346163 2149446 cri.go:96] found id: ""
	I1222 01:54:38.346238 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.346261 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:38.346279 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:38.346367 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:38.372025 2149446 cri.go:96] found id: ""
	I1222 01:54:38.372050 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.372058 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:38.372065 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:38.372133 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:38.399739 2149446 cri.go:96] found id: ""
	I1222 01:54:38.399762 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.399770 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:38.399780 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:38.399844 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:38.424497 2149446 cri.go:96] found id: ""
	I1222 01:54:38.424565 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.424589 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:38.424608 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:38.424696 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:38.449671 2149446 cri.go:96] found id: ""
	I1222 01:54:38.449699 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.449708 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:38.449714 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:38.449776 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:38.477607 2149446 cri.go:96] found id: ""
	I1222 01:54:38.477674 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.477697 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:38.477710 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:38.477785 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:38.506176 2149446 cri.go:96] found id: ""
	I1222 01:54:38.506204 2149446 logs.go:282] 0 containers: []
	W1222 01:54:38.506212 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:38.506220 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:38.506231 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:38.533790 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:38.533822 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:38.561674 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:38.561700 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:38.628458 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:38.628502 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:38.649997 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:38.650027 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:38.712931 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:38.704109   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.704905   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.706476   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.707004   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.708610   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:38.704109   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.704905   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.706476   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.707004   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:38.708610   13001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:41.213505 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:41.225530 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:41.225616 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:41.255991 2149446 cri.go:96] found id: ""
	I1222 01:54:41.256021 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.256029 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:41.256036 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:41.256094 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:41.282289 2149446 cri.go:96] found id: ""
	I1222 01:54:41.282314 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.282333 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:41.282339 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:41.282419 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:41.308460 2149446 cri.go:96] found id: ""
	I1222 01:54:41.308488 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.308499 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:41.308505 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:41.308566 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:41.333516 2149446 cri.go:96] found id: ""
	I1222 01:54:41.333547 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.333555 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:41.333562 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:41.333620 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:41.359576 2149446 cri.go:96] found id: ""
	I1222 01:54:41.359598 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.359607 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:41.359614 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:41.359672 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:41.384042 2149446 cri.go:96] found id: ""
	I1222 01:54:41.384066 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.384075 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:41.384082 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:41.384142 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:41.413519 2149446 cri.go:96] found id: ""
	I1222 01:54:41.413549 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.413559 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:41.413565 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:41.413624 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:41.454139 2149446 cri.go:96] found id: ""
	I1222 01:54:41.454162 2149446 logs.go:282] 0 containers: []
	W1222 01:54:41.454171 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:41.454180 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:41.454191 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:41.482322 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:41.482355 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:41.511950 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:41.511979 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:41.577845 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:41.577882 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:41.595747 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:41.595826 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:41.676481 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:41.667994   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.668497   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670197   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670551   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.672124   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:41.667994   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.668497   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670197   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.670551   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:41.672124   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:44.178129 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:44.189537 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:44.189613 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:44.217535 2149446 cri.go:96] found id: ""
	I1222 01:54:44.217563 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.217572 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:44.217580 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:44.217646 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:44.243271 2149446 cri.go:96] found id: ""
	I1222 01:54:44.243299 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.243308 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:44.243315 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:44.243379 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:44.272096 2149446 cri.go:96] found id: ""
	I1222 01:54:44.272120 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.272129 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:44.272135 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:44.272194 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:44.298955 2149446 cri.go:96] found id: ""
	I1222 01:54:44.298983 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.298993 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:44.298999 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:44.299070 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:44.324703 2149446 cri.go:96] found id: ""
	I1222 01:54:44.324728 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.324737 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:44.324743 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:44.324834 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:44.350284 2149446 cri.go:96] found id: ""
	I1222 01:54:44.350307 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.350316 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:44.350322 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:44.350384 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:44.378349 2149446 cri.go:96] found id: ""
	I1222 01:54:44.378375 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.378384 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:44.378390 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:44.378453 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:44.408220 2149446 cri.go:96] found id: ""
	I1222 01:54:44.408244 2149446 logs.go:282] 0 containers: []
	W1222 01:54:44.408258 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:44.408271 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:44.408283 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:44.474431 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:44.474465 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:44.491072 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:44.491099 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:44.556034 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:44.547925   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.548318   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.549960   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.550258   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.551701   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:44.547925   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.548318   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.549960   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.550258   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:44.551701   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:44.556055 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:44.556068 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:44.584394 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:44.584431 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.124887 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:47.137092 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:54:47.137169 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:54:47.164240 2149446 cri.go:96] found id: ""
	I1222 01:54:47.164268 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.164277 2149446 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:54:47.164284 2149446 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 01:54:47.164343 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:54:47.189698 2149446 cri.go:96] found id: ""
	I1222 01:54:47.189723 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.189732 2149446 logs.go:284] No container was found matching "etcd"
	I1222 01:54:47.189739 2149446 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 01:54:47.189799 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:54:47.215248 2149446 cri.go:96] found id: ""
	I1222 01:54:47.215276 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.215285 2149446 logs.go:284] No container was found matching "coredns"
	I1222 01:54:47.215291 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:54:47.215350 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:54:47.241376 2149446 cri.go:96] found id: ""
	I1222 01:54:47.241401 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.241409 2149446 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:54:47.241416 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:54:47.241475 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:54:47.266358 2149446 cri.go:96] found id: ""
	I1222 01:54:47.266383 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.266392 2149446 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:54:47.266400 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:54:47.266477 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:54:47.292142 2149446 cri.go:96] found id: ""
	I1222 01:54:47.292167 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.292175 2149446 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:54:47.292182 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 01:54:47.292260 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:54:47.318478 2149446 cri.go:96] found id: ""
	I1222 01:54:47.318503 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.318511 2149446 logs.go:284] No container was found matching "kindnet"
	I1222 01:54:47.318520 2149446 cri.go:61] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:54:47.318581 2149446 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:54:47.343233 2149446 cri.go:96] found id: ""
	I1222 01:54:47.343313 2149446 logs.go:282] 0 containers: []
	W1222 01:54:47.343328 2149446 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:54:47.343338 2149446 logs.go:123] Gathering logs for dmesg ...
	I1222 01:54:47.343351 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:54:47.360045 2149446 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:54:47.360074 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:54:47.426418 2149446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:54:47.417312   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.418166   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.419870   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.420511   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:54:47.422229   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:54:47.426442 2149446 logs.go:123] Gathering logs for CRI-O ...
	I1222 01:54:47.426457 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1222 01:54:47.455495 2149446 logs.go:123] Gathering logs for container status ...
	I1222 01:54:47.455529 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:54:47.485526 2149446 logs.go:123] Gathering logs for kubelet ...
	I1222 01:54:47.485555 2149446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:54:50.052484 2149446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:50.066978 2149446 out.go:203] 
	W1222 01:54:50.069810 2149446 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:54:50.069848 2149446 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:54:50.069857 2149446 out.go:285] * Related issues:
	W1222 01:54:50.069872 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:54:50.069887 2149446 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:54:50.072764 2149446 out.go:203] 
	
	
	==> CRI-O <==
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.44871456Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448722183Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448729674Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448735557Z" level=info msg="RDT not available in the host system"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.448747987Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449934856Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449963803Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.449983872Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450671237Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450694973Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.450859498Z" level=info msg="Updated default CNI network name to "
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.451450093Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.451791835Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.45184593Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.49846563Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498673066Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498765978Z" level=info msg="Create NRI interface"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498873749Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498887845Z" level=info msg="runtime interface created"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498899816Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498906454Z" level=info msg="runtime interface starting up..."
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498912689Z" level=info msg="starting plugins..."
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.498927811Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:48:46 newest-cni-704217 crio[620]: time="2025-12-22T01:48:46.49901347Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:48:46 newest-cni-704217 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:55:02.975697   13986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:55:02.976480   13986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:55:02.978091   13986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:55:02.978449   13986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:55:02.979984   13986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 01:07] overlayfs: idmapped layers are currently not supported
	[ +36.618030] overlayfs: idmapped layers are currently not supported
	[Dec22 01:08] overlayfs: idmapped layers are currently not supported
	[Dec22 01:09] overlayfs: idmapped layers are currently not supported
	[  +4.686439] overlayfs: idmapped layers are currently not supported
	[Dec22 01:10] overlayfs: idmapped layers are currently not supported
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:55:03 up 1 day,  8:37,  0 user,  load average: 0.62, 0.70, 1.18
	Linux newest-cni-704217 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:55:00 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:55:01 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 22 01:55:01 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:55:01 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:55:01 newest-cni-704217 kubelet[13874]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:55:01 newest-cni-704217 kubelet[13874]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:55:01 newest-cni-704217 kubelet[13874]: E1222 01:55:01.463672   13874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:55:01 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:55:01 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:55:02 newest-cni-704217 kubelet[13888]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:55:02 newest-cni-704217 kubelet[13888]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:55:02 newest-cni-704217 kubelet[13888]: E1222 01:55:02.208659   13888 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:55:02 newest-cni-704217 kubelet[13974]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:55:02 newest-cni-704217 kubelet[13974]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 01:55:02 newest-cni-704217 kubelet[13974]: E1222 01:55:02.933183   13974 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:55:02 newest-cni-704217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
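The kubelet section of the log above contains the actual failure chain: every restart (counters 496-498) exits during config validation with "kubelet is configured to not run on a host using cgroup v1", so the static pods are never created and minikube's wait for the apiserver process eventually aborts with K8S_APISERVER_MISSING. A minimal way to confirm the node's cgroup version (a diagnostic sketch, not part of the test run; the profile name is taken from the log):

    # cgroup2fs means cgroup v2; tmpfs means the legacy v1 hierarchy this kubelet rejects
    minikube ssh -p newest-cni-704217 "stat -fc %T /sys/fs/cgroup"

Given the Ubuntu 20.04 host kernel reported in the kernel section (cgroup v1 by default), the generic suggestion to check apiserver flags and SELinux is unlikely to help; the job would need a cgroup v2 host image.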
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-704217 -n newest-cni-704217: exit status 2 (387.743271ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-704217" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.49s)
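For a fuller picture than the single {{.APIServer}} field queried above, minikube's status command accepts a Go template over its documented status fields; a follow-up sketch using the same binary and profile as the log (expect the stopped kubelet and apiserver seen above):

    # Host, Kubelet, APIServer, Kubeconfig are the standard minikube status fields
    out/minikube-linux-arm64 status -p newest-cni-704217 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'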

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (253.04s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1222 02:00:09.774575 1881219 config.go:182] Loaded profile config "calico-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
E1222 02:00:47.970749 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.186048 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.191395 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.201630 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.221995 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.262380 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.342629 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.503664 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:13.824553 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:14.465163 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:15.746225 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:18.307210 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:23.427608 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:25.759284 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:33.668317 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:01:54.149508 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 7 more times)
E1222 02:02:01.692055 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/default-k8s-diff-port-733235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1222 02:02:03.284829 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 23 more times)
E1222 02:02:26.890344 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 7 more times)
E1222 02:02:35.110480 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 8 more times)
E1222 02:02:43.835601 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 9 more times)
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 2 (319.578815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-491054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-491054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.493µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-491054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
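The warnings collapsed above come from the test helper's wait loop: it repeatedly lists pods by label selector against the (unreachable) API server until a 9m0s deadline expires. A minimal client-go sketch of that pattern, assuming an already-configured kubernetes.Interface (illustrative names, not minikube's actual helper code):

    // waitforpods.go: poll for pods matching a label selector until the
    // context deadline, tolerating transient connection errors like the
    // "connection refused" warnings above.
    package waitforpods

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func WaitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
        ticker := time.NewTicker(5 * time.Second)
        defer ticker.Stop()
        for {
            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                // Logged and retried; the API server may still be coming up.
                fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
            } else if len(pods.Items) > 0 {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
            case <-ticker.C:
            }
        }
    }

In this run the loop can never succeed because nothing is listening on 192.168.76.2:8443, so the context deadline fires and the test fails.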
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-491054
helpers_test.go:244: (dbg) docker inspect no-preload-491054:

-- stdout --
	[
	    {
	        "Id": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	        "Created": "2025-12-22T01:32:55.262645623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2142290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:43:30.530329674Z",
	            "FinishedAt": "2025-12-22T01:43:29.176921862Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hostname",
	        "HostsPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/hosts",
	        "LogPath": "/var/lib/docker/containers/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe/313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe-json.log",
	        "Name": "/no-preload-491054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "313e04d6776eeda9c3efb35f7d85c207bbdb06fd62b352672c1b5e74fd18fbfe",
	                "LowerDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe-init/diff:/var/lib/docker/overlay2/408ceca30057c5d56300899e849d01cbb6321d6e5886297627594dba32125968/diff",
	                "MergedDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/919e18d51d3486cd0e212da5d02c1f8cec3c4b4e03c4d4c5336b76db451ceefe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491054",
	                "Source": "/var/lib/docker/volumes/no-preload-491054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491054",
	                "name.minikube.sigs.k8s.io": "no-preload-491054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f5325c5c7084a0f9b6fab754824a17e882a8a3c300adc09d5d8d9446a681793",
	            "SandboxKey": "/var/run/docker/netns/4f5325c5c708",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38785"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38783"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:66:ff:3e:a4:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19fc401d8662d0cf8941c7bbec5156d3018f76638db0a4459b76764a03803463",
	                    "EndpointID": "44bb6686738bfe98627c492c33328f969aae00e65930a103715f4085574a78d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491054",
	                        "313e04d6776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
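Most of the inspect dump above is noise for debugging; the fields that matter here (container state, IP, host-port bindings) can be extracted directly with docker inspect's built-in Go-template support. A sketch using the container name from this run (an illustrative helper, not part of the test suite):

    // inspect.go: pull one mapped host port out of `docker inspect`
    // instead of parsing the full JSON document shown above.
    package inspect

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // HostPort returns the host port bound for a container port such as
    // "8443/tcp" (the Kubernetes API server port in the dump above).
    func HostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

Against the state captured above, HostPort("no-preload-491054", "8443/tcp") would return "38784": the apiserver port is still published on 127.0.0.1 even though nothing inside the container is answering on it.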
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 2 (313.516573ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
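Both status probes above read a single field via a Go template ({{.APIServer}}, {{.Host}}) and exit non-zero when a component is down, which the harness treats as informational ("may be ok"). A small sketch of driving the same CLI from Go (a hypothetical helper; the only exit-code assumption is that a non-zero exit can still carry a valid state on stdout, as seen here):

    // status.go: query one field of `minikube status` for a profile.
    package status

    import (
        "os/exec"
        "strings"
    )

    // Query runs `minikube status --format <field> -p <profile>`, where
    // field is a Go template such as "{{.Host}}" or "{{.APIServer}}".
    // A non-zero exit (e.g. exit status 2 with "Stopped" on stdout) is
    // expected for a degraded cluster, so stdout is preferred over err.
    func Query(profile, field string) (string, error) {
        out, err := exec.Command("minikube", "status", "--format", field, "-p", profile).Output()
        if state := strings.TrimSpace(string(out)); state != "" {
            return state, nil
        }
        return "", err
    }

For the run captured here, Query("no-preload-491054", "{{.Host}}") would yield "Running" while "{{.APIServer}}" yields "Stopped", matching the container-up/apiserver-down state the post-mortem goes on to document.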
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491054 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-678854 sudo systemctl status kubelet --all --full --no-pager                                                                                  │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl cat kubelet --no-pager                                                                                                  │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                   │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cat /etc/kubernetes/kubelet.conf                                                                                                  │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cat /var/lib/kubelet/config.yaml                                                                                                  │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl status docker --all --full --no-pager                                                                                   │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl cat docker --no-pager                                                                                                   │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cat /etc/docker/daemon.json                                                                                                       │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	│ ssh     │ -p custom-flannel-678854 sudo docker system info                                                                                                                │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl status cri-docker --all --full --no-pager                                                                               │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl cat cri-docker --no-pager                                                                                               │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                          │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	│ ssh     │ -p custom-flannel-678854 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                    │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cri-dockerd --version                                                                                                             │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl status containerd --all --full --no-pager                                                                               │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl cat containerd --no-pager                                                                                               │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cat /lib/systemd/system/containerd.service                                                                                        │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo cat /etc/containerd/config.toml                                                                                                   │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo containerd config dump                                                                                                            │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl status crio --all --full --no-pager                                                                                     │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo systemctl cat crio --no-pager                                                                                                     │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ ssh     │ -p custom-flannel-678854 sudo crio config                                                                                                                       │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ delete  │ -p custom-flannel-678854                                                                                                                                        │ custom-flannel-678854     │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │ 22 Dec 25 02:02 UTC │
	│ start   │ -p enable-default-cni-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-678854 │ jenkins │ v1.37.0 │ 22 Dec 25 02:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 02:02:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 02:02:14.575173 2195564 out.go:360] Setting OutFile to fd 1 ...
	I1222 02:02:14.575579 2195564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 02:02:14.575620 2195564 out.go:374] Setting ErrFile to fd 2...
	I1222 02:02:14.575640 2195564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 02:02:14.575917 2195564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 02:02:14.576413 2195564 out.go:368] Setting JSON to false
	I1222 02:02:14.577269 2195564 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":117884,"bootTime":1766251051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 02:02:14.577402 2195564 start.go:143] virtualization:  
	I1222 02:02:14.582169 2195564 out.go:179] * [enable-default-cni-678854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 02:02:14.587251 2195564 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 02:02:14.587326 2195564 notify.go:221] Checking for updates...
	I1222 02:02:14.596285 2195564 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 02:02:14.599916 2195564 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 02:02:14.603092 2195564 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 02:02:14.606139 2195564 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 02:02:14.609729 2195564 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 02:02:14.613429 2195564 config.go:182] Loaded profile config "no-preload-491054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 02:02:14.613600 2195564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 02:02:14.642009 2195564 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 02:02:14.642127 2195564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 02:02:14.715672 2195564 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 02:02:14.706462941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 02:02:14.715775 2195564 docker.go:319] overlay module found
	I1222 02:02:14.720769 2195564 out.go:179] * Using the docker driver based on user configuration
	I1222 02:02:14.723598 2195564 start.go:309] selected driver: docker
	I1222 02:02:14.723615 2195564 start.go:928] validating driver "docker" against <nil>
	I1222 02:02:14.723642 2195564 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 02:02:14.724401 2195564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 02:02:14.777118 2195564 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 02:02:14.768252576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 02:02:14.777312 2195564 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	E1222 02:02:14.777643 2195564 start_flags.go:484] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1222 02:02:14.777679 2195564 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 02:02:14.780707 2195564 out.go:179] * Using Docker driver with root privileges
	I1222 02:02:14.783669 2195564 cni.go:84] Creating CNI manager for "bridge"
	I1222 02:02:14.783692 2195564 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 02:02:14.783782 2195564 start.go:353] cluster config:
	{Name:enable-default-cni-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 02:02:14.788785 2195564 out.go:179] * Starting "enable-default-cni-678854" primary control-plane node in "enable-default-cni-678854" cluster
	I1222 02:02:14.791470 2195564 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 02:02:14.794344 2195564 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 02:02:14.797180 2195564 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 02:02:14.797234 2195564 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1222 02:02:14.797247 2195564 cache.go:65] Caching tarball of preloaded images
	I1222 02:02:14.797257 2195564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 02:02:14.797395 2195564 preload.go:251] Found /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1222 02:02:14.797411 2195564 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1222 02:02:14.797529 2195564 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/config.json ...
	I1222 02:02:14.797552 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/config.json: {Name:mk217de5f33e759d77b4f4ffc49ead72c2e43a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:14.816176 2195564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 02:02:14.816201 2195564 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 02:02:14.816224 2195564 cache.go:243] Successfully downloaded all kic artifacts
	I1222 02:02:14.816261 2195564 start.go:360] acquireMachinesLock for enable-default-cni-678854: {Name:mk1dbf32004c6f8f055cc69ae41a241908570c59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 02:02:14.816381 2195564 start.go:364] duration metric: took 97.958µs to acquireMachinesLock for "enable-default-cni-678854"
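The acquireMachinesLock line above is a named mutex with a 500ms retry delay and a 10-minute timeout (acquisition took ~98µs here because the lock was uncontended). A minimal stand-in sketch of that retry/timeout pattern, using an O_EXCL lock file rather than minikube's real mutex implementation (path, delay, and timeout values simply mirror the log):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock is a simplified stand-in for the machine lock seen in the
    // log: retry every `delay` until `timeout` elapses, holding the lock via
    // an exclusively-created file. Not minikube's actual implementation.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release callback
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held")
    }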
	I1222 02:02:14.816413 2195564 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 02:02:14.816482 2195564 start.go:125] createHost starting for "" (driver="docker")
	I1222 02:02:14.819911 2195564 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 02:02:14.820136 2195564 start.go:159] libmachine.API.Create for "enable-default-cni-678854" (driver="docker")
	I1222 02:02:14.820180 2195564 client.go:173] LocalClient.Create starting
	I1222 02:02:14.820304 2195564 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem
	I1222 02:02:14.820352 2195564 main.go:144] libmachine: Decoding PEM data...
	I1222 02:02:14.820378 2195564 main.go:144] libmachine: Parsing certificate...
	I1222 02:02:14.820435 2195564 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem
	I1222 02:02:14.820462 2195564 main.go:144] libmachine: Decoding PEM data...
	I1222 02:02:14.820477 2195564 main.go:144] libmachine: Parsing certificate...
	I1222 02:02:14.820847 2195564 cli_runner.go:164] Run: docker network inspect enable-default-cni-678854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 02:02:14.834866 2195564 cli_runner.go:211] docker network inspect enable-default-cni-678854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 02:02:14.834956 2195564 network_create.go:284] running [docker network inspect enable-default-cni-678854] to gather additional debugging logs...
	I1222 02:02:14.834983 2195564 cli_runner.go:164] Run: docker network inspect enable-default-cni-678854
	W1222 02:02:14.850849 2195564 cli_runner.go:211] docker network inspect enable-default-cni-678854 returned with exit code 1
	I1222 02:02:14.850881 2195564 network_create.go:287] error running [docker network inspect enable-default-cni-678854]: docker network inspect enable-default-cni-678854: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-678854 not found
	I1222 02:02:14.850895 2195564 network_create.go:289] output of [docker network inspect enable-default-cni-678854]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-678854 not found
	
	** /stderr **
	I1222 02:02:14.850991 2195564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 02:02:14.867077 2195564 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
	I1222 02:02:14.867438 2195564 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52decb35bc71 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:0b:c2:75:c2:5b} reservation:<nil>}
	I1222 02:02:14.867782 2195564 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c059d4129b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:9f:70:b4:1e:ec} reservation:<nil>}
	I1222 02:02:14.868046 2195564 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fc401d8662 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:05:63:31:92:ec} reservation:<nil>}
	I1222 02:02:14.868479 2195564 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3a1b0}
	I1222 02:02:14.868501 2195564 network_create.go:124] attempt to create docker network enable-default-cni-678854 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 02:02:14.868574 2195564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-678854 enable-default-cni-678854
	I1222 02:02:14.925645 2195564 network_create.go:108] docker network enable-default-cni-678854 192.168.85.0/24 created
	I1222 02:02:14.925681 2195564 kic.go:121] calculated static IP "192.168.85.2" for the "enable-default-cni-678854" container
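The four "skipping subnet" lines show the scan pattern: the search starts at 192.168.49.0/24 and advances the third octet in steps of 9 (49, 58, 67, 76, 85, ...) until it finds a /24 whose gateway is not bound to an existing bridge; the node then gets the first client address after the gateway (192.168.85.2 here). A minimal sketch of that scan, with the in-use check stubbed out (minikube's real code inspects host network interfaces):

    package main

    import "fmt"

    // freePrivateSubnet mirrors the scan visible in the log: step the third
    // octet by 9 from 49 upward until the candidate gateway is free. The
    // taken() predicate is a stub standing in for the host interface check.
    func freePrivateSubnet(taken func(gw string) bool) (subnet, gateway, nodeIP string) {
        for third := 49; third <= 246; third += 9 {
            gw := fmt.Sprintf("192.168.%d.1", third)
            if !taken(gw) {
                return fmt.Sprintf("192.168.%d.0/24", third), gw,
                    fmt.Sprintf("192.168.%d.2", third)
            }
        }
        return "", "", ""
    }

    func main() {
        // The four bridge gateways the log reports as already taken.
        used := map[string]bool{
            "192.168.49.1": true, "192.168.58.1": true,
            "192.168.67.1": true, "192.168.76.1": true,
        }
        subnet, gw, node := freePrivateSubnet(func(g string) bool { return used[g] })
        fmt.Println(subnet, gw, node) // 192.168.85.0/24 192.168.85.1 192.168.85.2
    }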
	I1222 02:02:14.925755 2195564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 02:02:14.941645 2195564 cli_runner.go:164] Run: docker volume create enable-default-cni-678854 --label name.minikube.sigs.k8s.io=enable-default-cni-678854 --label created_by.minikube.sigs.k8s.io=true
	I1222 02:02:14.959801 2195564 oci.go:103] Successfully created a docker volume enable-default-cni-678854
	I1222 02:02:14.959891 2195564 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-678854-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-678854 --entrypoint /usr/bin/test -v enable-default-cni-678854:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 02:02:15.513763 2195564 oci.go:107] Successfully prepared a docker volume enable-default-cni-678854
	I1222 02:02:15.513832 2195564 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 02:02:15.513847 2195564 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 02:02:15.513921 2195564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-678854:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 02:02:19.356627 2195564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-678854:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.842660582s)
	I1222 02:02:19.356670 2195564 kic.go:203] duration metric: took 3.842819321s to extract preloaded images to volume ...
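The 3.8s step above preloads the container images by untarring an lz4-compressed archive straight into the machine's named Docker volume, reusing the kicbase image itself as a throwaway tar runner. A sketch of that invocation from Go; the paths in main are stand-ins for the Jenkins cache paths in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mirrors the docker run from the log: mount the preloaded
    // tarball read-only plus the machine's volume, then untar with lz4
    // decompression into the volume.
    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("preload extraction failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "/path/to/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4", // stand-in path
            "enable-default-cni-678854",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260",
        )
        if err != nil {
            fmt.Println(err)
        }
    }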
	W1222 02:02:19.356829 2195564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 02:02:19.356945 2195564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 02:02:19.410813 2195564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-678854 --name enable-default-cni-678854 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-678854 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-678854 --network enable-default-cni-678854 --ip 192.168.85.2 --volume enable-default-cni-678854:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 02:02:19.692999 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Running}}
	I1222 02:02:19.711626 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Status}}
	I1222 02:02:19.736241 2195564 cli_runner.go:164] Run: docker exec enable-default-cni-678854 stat /var/lib/dpkg/alternatives/iptables
	I1222 02:02:19.792899 2195564 oci.go:144] the created container "enable-default-cni-678854" has a running status.
	I1222 02:02:19.792933 2195564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa...
	I1222 02:02:19.939846 2195564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 02:02:19.967629 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Status}}
	I1222 02:02:19.989274 2195564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 02:02:19.989293 2195564 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-678854 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 02:02:20.048213 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Status}}
	I1222 02:02:20.078861 2195564 machine.go:94] provisionDockerMachine start ...
	I1222 02:02:20.078959 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:20.107240 2195564 main.go:144] libmachine: Using SSH client type: native
	I1222 02:02:20.107592 2195564 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38811 <nil> <nil>}
	I1222 02:02:20.107603 2195564 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 02:02:20.108354 2195564 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34710->127.0.0.1:38811: read: connection reset by peer
	I1222 02:02:23.240950 2195564 main.go:144] libmachine: SSH cmd err, output: <nil>: enable-default-cni-678854
	
	I1222 02:02:23.240978 2195564 ubuntu.go:182] provisioning hostname "enable-default-cni-678854"
	I1222 02:02:23.241065 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:23.258218 2195564 main.go:144] libmachine: Using SSH client type: native
	I1222 02:02:23.258528 2195564 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38811 <nil> <nil>}
	I1222 02:02:23.258544 2195564 main.go:144] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-678854 && echo "enable-default-cni-678854" | sudo tee /etc/hostname
	I1222 02:02:23.399380 2195564 main.go:144] libmachine: SSH cmd err, output: <nil>: enable-default-cni-678854
	
	I1222 02:02:23.399476 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:23.416759 2195564 main.go:144] libmachine: Using SSH client type: native
	I1222 02:02:23.417072 2195564 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38811 <nil> <nil>}
	I1222 02:02:23.417094 2195564 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-678854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-678854/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-678854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 02:02:23.550114 2195564 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 02:02:23.550142 2195564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1874095/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1874095/.minikube}
	I1222 02:02:23.550162 2195564 ubuntu.go:190] setting up certificates
	I1222 02:02:23.550188 2195564 provision.go:84] configureAuth start
	I1222 02:02:23.550259 2195564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-678854
	I1222 02:02:23.567560 2195564 provision.go:143] copyHostCerts
	I1222 02:02:23.567636 2195564 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem, removing ...
	I1222 02:02:23.567646 2195564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem
	I1222 02:02:23.567732 2195564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.pem (1082 bytes)
	I1222 02:02:23.567836 2195564 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem, removing ...
	I1222 02:02:23.567846 2195564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem
	I1222 02:02:23.567874 2195564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/cert.pem (1123 bytes)
	I1222 02:02:23.567944 2195564 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem, removing ...
	I1222 02:02:23.567954 2195564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem
	I1222 02:02:23.567979 2195564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1874095/.minikube/key.pem (1679 bytes)
	I1222 02:02:23.568044 2195564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-678854 san=[127.0.0.1 192.168.85.2 enable-default-cni-678854 localhost minikube]
	I1222 02:02:23.715643 2195564 provision.go:177] copyRemoteCerts
	I1222 02:02:23.715720 2195564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 02:02:23.715767 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:23.733435 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:23.829010 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 02:02:23.845982 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 02:02:23.864027 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1222 02:02:23.882004 2195564 provision.go:87] duration metric: took 331.786146ms to configureAuth
	I1222 02:02:23.882032 2195564 ubuntu.go:206] setting minikube options for container-runtime
	I1222 02:02:23.882244 2195564 config.go:182] Loaded profile config "enable-default-cni-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 02:02:23.882353 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:23.899533 2195564 main.go:144] libmachine: Using SSH client type: native
	I1222 02:02:23.899856 2195564 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38811 <nil> <nil>}
	I1222 02:02:23.899877 2195564 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1222 02:02:24.194556 2195564 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1222 02:02:24.194580 2195564 machine.go:97] duration metric: took 4.115698353s to provisionDockerMachine
	I1222 02:02:24.194591 2195564 client.go:176] duration metric: took 9.374401101s to LocalClient.Create
	I1222 02:02:24.194604 2195564 start.go:167] duration metric: took 9.374470121s to libmachine.API.Create "enable-default-cni-678854"
	I1222 02:02:24.194613 2195564 start.go:293] postStartSetup for "enable-default-cni-678854" (driver="docker")
	I1222 02:02:24.194623 2195564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 02:02:24.194691 2195564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 02:02:24.194745 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:24.213762 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:24.317616 2195564 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 02:02:24.320878 2195564 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 02:02:24.320912 2195564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 02:02:24.320923 2195564 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/addons for local assets ...
	I1222 02:02:24.320976 2195564 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1874095/.minikube/files for local assets ...
	I1222 02:02:24.321056 2195564 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem -> 18812192.pem in /etc/ssl/certs
	I1222 02:02:24.321168 2195564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 02:02:24.328465 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 02:02:24.346461 2195564 start.go:296] duration metric: took 151.833553ms for postStartSetup
	I1222 02:02:24.346841 2195564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-678854
	I1222 02:02:24.367124 2195564 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/config.json ...
	I1222 02:02:24.367409 2195564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 02:02:24.367461 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:24.388437 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:24.482701 2195564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 02:02:24.487487 2195564 start.go:128] duration metric: took 9.670990659s to createHost
	I1222 02:02:24.487512 2195564 start.go:83] releasing machines lock for "enable-default-cni-678854", held for 9.671117785s
	I1222 02:02:24.487589 2195564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-678854
	I1222 02:02:24.504119 2195564 ssh_runner.go:195] Run: cat /version.json
	I1222 02:02:24.504172 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:24.504437 2195564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 02:02:24.504489 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:24.527141 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:24.543066 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:24.629244 2195564 ssh_runner.go:195] Run: systemctl --version
	I1222 02:02:24.728681 2195564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1222 02:02:24.764074 2195564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 02:02:24.768481 2195564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 02:02:24.768553 2195564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 02:02:24.796346 2195564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 02:02:24.796428 2195564 start.go:496] detecting cgroup driver to use...
	I1222 02:02:24.796473 2195564 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 02:02:24.796557 2195564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 02:02:24.813745 2195564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 02:02:24.826885 2195564 docker.go:218] disabling cri-docker service (if available) ...
	I1222 02:02:24.826953 2195564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 02:02:24.844679 2195564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 02:02:24.863301 2195564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 02:02:24.988832 2195564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 02:02:25.127911 2195564 docker.go:234] disabling docker service ...
	I1222 02:02:25.128016 2195564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 02:02:25.157126 2195564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 02:02:25.172065 2195564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 02:02:25.284296 2195564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 02:02:25.398277 2195564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 02:02:25.411284 2195564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 02:02:25.425810 2195564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1222 02:02:25.425875 2195564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.434739 2195564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1222 02:02:25.434832 2195564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.443821 2195564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.453266 2195564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.465382 2195564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 02:02:25.473685 2195564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.483030 2195564 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.496816 2195564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1222 02:02:25.506455 2195564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 02:02:25.514609 2195564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 02:02:25.522047 2195564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 02:02:25.640986 2195564 ssh_runner.go:195] Run: sudo systemctl restart crio
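Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment before crio is restarted. This is a reconstruction from the commands in the log, not a dump of the actual file, and the section headers are an assumption based on standard cri-o config layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]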
	I1222 02:02:25.828441 2195564 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1222 02:02:25.828509 2195564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1222 02:02:25.832295 2195564 start.go:564] Will wait 60s for crictl version
	I1222 02:02:25.832353 2195564 ssh_runner.go:195] Run: which crictl
	I1222 02:02:25.835661 2195564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 02:02:25.862662 2195564 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1222 02:02:25.862810 2195564 ssh_runner.go:195] Run: crio --version
	I1222 02:02:25.897173 2195564 ssh_runner.go:195] Run: crio --version
	I1222 02:02:25.932138 2195564 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1222 02:02:25.934885 2195564 cli_runner.go:164] Run: docker network inspect enable-default-cni-678854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 02:02:25.952574 2195564 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 02:02:25.956482 2195564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
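The one-liner above is an idempotent hosts-file update: drop any existing host.minikube.internal entry, append the current mapping, write to a temp file, and copy it back with sudo (a plain redirect would fail because the shell, not sudo, opens /etc/hosts). The same pattern recurs later for control-plane.minikube.internal. A rough Go equivalent, which assumes it can write the target directly instead of going through sudo:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost mirrors the shell one-liner: remove any line ending in
    // "\t<host>", append "ip\thost", and replace the file via a temp file.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }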
	I1222 02:02:25.966684 2195564 kubeadm.go:884] updating cluster {Name:enable-default-cni-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 02:02:25.966795 2195564 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1222 02:02:25.966856 2195564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 02:02:26.004295 2195564 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 02:02:26.004335 2195564 crio.go:433] Images already preloaded, skipping extraction
	I1222 02:02:26.004429 2195564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 02:02:26.033762 2195564 crio.go:514] all images are preloaded for cri-o runtime.
	I1222 02:02:26.033787 2195564 cache_images.go:86] Images are preloaded, skipping loading
	I1222 02:02:26.033796 2195564 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1222 02:02:26.033891 2195564 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=enable-default-cni-678854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1222 02:02:26.033982 2195564 ssh_runner.go:195] Run: crio config
	I1222 02:02:26.103396 2195564 cni.go:84] Creating CNI manager for "bridge"
	I1222 02:02:26.103435 2195564 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 02:02:26.103482 2195564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-678854 NodeName:enable-default-cni-678854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 02:02:26.103685 2195564 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-678854"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 02:02:26.103768 2195564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 02:02:26.111719 2195564 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 02:02:26.111832 2195564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 02:02:26.119211 2195564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1222 02:02:26.132313 2195564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 02:02:26.144555 2195564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1222 02:02:26.156835 2195564 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 02:02:26.160380 2195564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 02:02:26.169478 2195564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 02:02:26.280422 2195564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 02:02:26.296825 2195564 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854 for IP: 192.168.85.2
	I1222 02:02:26.296847 2195564 certs.go:195] generating shared ca certs ...
	I1222 02:02:26.296862 2195564 certs.go:227] acquiring lock for ca certs: {Name:mk38d29da14276ba47ca1cb700d074620e3d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:26.297000 2195564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key
	I1222 02:02:26.297051 2195564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key
	I1222 02:02:26.297064 2195564 certs.go:257] generating profile certs ...
	I1222 02:02:26.297128 2195564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/client.key
	I1222 02:02:26.297146 2195564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/client.crt with IP's: []
	I1222 02:02:26.709500 2195564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/client.crt ...
	I1222 02:02:26.709530 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/client.crt: {Name:mkcf20b13115752d5db505d29cd9563dd8087460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:26.709737 2195564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/client.key ...
	I1222 02:02:26.709753 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/client.key: {Name:mkb3453992e2b4fd7b72f78bfa3f0bf810e09d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:26.709850 2195564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.key.8100d043
	I1222 02:02:26.709868 2195564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.crt.8100d043 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 02:02:26.920525 2195564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.crt.8100d043 ...
	I1222 02:02:26.920558 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.crt.8100d043: {Name:mk467d7e935fd7498a566e4cbf4dfd3f8079c2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:26.920752 2195564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.key.8100d043 ...
	I1222 02:02:26.920768 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.key.8100d043: {Name:mkc8bec86a565cba69eb0da9e90141d87ff628b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:26.920856 2195564 certs.go:382] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.crt.8100d043 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.crt
	I1222 02:02:26.920943 2195564 certs.go:386] copying /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.key.8100d043 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.key
	I1222 02:02:26.921002 2195564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.key
	I1222 02:02:26.921026 2195564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.crt with IP's: []
	I1222 02:02:27.317981 2195564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.crt ...
	I1222 02:02:27.318013 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.crt: {Name:mk8895136b2479a336b72257d6db4d63a5cf4473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:27.318198 2195564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.key ...
	I1222 02:02:27.318215 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.key: {Name:mk012105a3cb53cf48dae6db16b99f60abffbf6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:27.318436 2195564 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem (1338 bytes)
	W1222 02:02:27.318482 2195564 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219_empty.pem, impossibly tiny 0 bytes
	I1222 02:02:27.318495 2195564 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca-key.pem (1679 bytes)
	I1222 02:02:27.318520 2195564 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/ca.pem (1082 bytes)
	I1222 02:02:27.318551 2195564 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/cert.pem (1123 bytes)
	I1222 02:02:27.318577 2195564 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/key.pem (1679 bytes)
	I1222 02:02:27.318628 2195564 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem (1708 bytes)
	I1222 02:02:27.319205 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 02:02:27.337561 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 02:02:27.359738 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 02:02:27.382593 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 02:02:27.401997 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1222 02:02:27.421682 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 02:02:27.441849 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 02:02:27.459536 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/enable-default-cni-678854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 02:02:27.476586 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/certs/1881219.pem --> /usr/share/ca-certificates/1881219.pem (1338 bytes)
	I1222 02:02:27.493983 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/ssl/certs/18812192.pem --> /usr/share/ca-certificates/18812192.pem (1708 bytes)
	I1222 02:02:27.512105 2195564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1874095/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 02:02:27.531729 2195564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 02:02:27.545883 2195564 ssh_runner.go:195] Run: openssl version
	I1222 02:02:27.552462 2195564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/18812192.pem
	I1222 02:02:27.560010 2195564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/18812192.pem /etc/ssl/certs/18812192.pem
	I1222 02:02:27.567759 2195564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18812192.pem
	I1222 02:02:27.571497 2195564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:18 /usr/share/ca-certificates/18812192.pem
	I1222 02:02:27.571561 2195564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18812192.pem
	I1222 02:02:27.613530 2195564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 02:02:27.620841 2195564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/18812192.pem /etc/ssl/certs/3ec20f2e.0
	I1222 02:02:27.628130 2195564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 02:02:27.635108 2195564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 02:02:27.642356 2195564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 02:02:27.646016 2195564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I1222 02:02:27.646093 2195564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 02:02:27.686860 2195564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 02:02:27.694425 2195564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 02:02:27.702290 2195564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1881219.pem
	I1222 02:02:27.710034 2195564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1881219.pem /etc/ssl/certs/1881219.pem
	I1222 02:02:27.717600 2195564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1881219.pem
	I1222 02:02:27.721115 2195564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:18 /usr/share/ca-certificates/1881219.pem
	I1222 02:02:27.721180 2195564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1881219.pem
	I1222 02:02:27.762266 2195564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 02:02:27.769698 2195564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1881219.pem /etc/ssl/certs/51391683.0
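The openssl/ln pairs above populate OpenSSL's hash-based CA lookup directory: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-name hash with a ".0" suffix (e.g. b5213941.0 for minikubeCA.pem, per the log). A sketch of one such pair, shelling out to openssl for the hash since its subject-name canonicalization is non-trivial to reimplement:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCAHash mirrors the two steps in the log: compute the subject-name
    // hash of a CA certificate with openssl, then symlink the cert into
    // certsDir as "<hash>.0" so lookup-by-hash finds it.
    func linkCAHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := certsDir + "/" + hash + ".0"
        _ = os.Remove(link) // replicate the -f in ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCAHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }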
	I1222 02:02:27.777136 2195564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 02:02:27.780667 2195564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 02:02:27.780716 2195564 kubeadm.go:401] StartCluster: {Name:enable-default-cni-678854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-678854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 02:02:27.780787 2195564 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1222 02:02:27.780846 2195564 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 02:02:27.809943 2195564 cri.go:96] found id: ""
	I1222 02:02:27.810024 2195564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 02:02:27.817720 2195564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 02:02:27.825618 2195564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 02:02:27.825703 2195564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 02:02:27.833475 2195564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 02:02:27.833544 2195564 kubeadm.go:158] found existing configuration files:
	
	I1222 02:02:27.833603 2195564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 02:02:27.841461 2195564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 02:02:27.841528 2195564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 02:02:27.849133 2195564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 02:02:27.857705 2195564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 02:02:27.857771 2195564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 02:02:27.866636 2195564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 02:02:27.875781 2195564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 02:02:27.875916 2195564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 02:02:27.884226 2195564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 02:02:27.893747 2195564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 02:02:27.893865 2195564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
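
The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not match, so kubeadm can regenerate it cleanly. A rough Go equivalent (a sketch under the same file list and endpoint shown in the log, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint; a missing file is treated the same way
// ("rm -f" ignores absence), matching the loop in the log above.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(p)
			fmt.Printf("removed stale config %s\n", p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
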
	I1222 02:02:27.902349 2195564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 02:02:27.943506 2195564 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 02:02:27.943634 2195564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 02:02:27.966962 2195564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 02:02:27.967125 2195564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 02:02:27.967201 2195564 kubeadm.go:319] OS: Linux
	I1222 02:02:27.967285 2195564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 02:02:27.967369 2195564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 02:02:27.967448 2195564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 02:02:27.967529 2195564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 02:02:27.967620 2195564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 02:02:27.967723 2195564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 02:02:27.967804 2195564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 02:02:27.967880 2195564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 02:02:27.967962 2195564 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 02:02:28.043103 2195564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 02:02:28.043284 2195564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 02:02:28.043417 2195564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 02:02:28.051254 2195564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 02:02:28.058181 2195564 out.go:252]   - Generating certificates and keys ...
	I1222 02:02:28.058365 2195564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 02:02:28.058487 2195564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 02:02:28.153894 2195564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 02:02:28.372879 2195564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 02:02:28.619454 2195564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 02:02:29.184410 2195564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 02:02:29.680025 2195564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 02:02:29.680244 2195564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-678854 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 02:02:30.199378 2195564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 02:02:30.199538 2195564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-678854 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 02:02:30.417552 2195564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 02:02:30.771493 2195564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 02:02:31.246923 2195564 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 02:02:31.247284 2195564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 02:02:31.941958 2195564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 02:02:33.600767 2195564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 02:02:33.816208 2195564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 02:02:33.896008 2195564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 02:02:33.952413 2195564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 02:02:33.953042 2195564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 02:02:33.955692 2195564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 02:02:33.959392 2195564 out.go:252]   - Booting up control plane ...
	I1222 02:02:33.959513 2195564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 02:02:33.959619 2195564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 02:02:33.959711 2195564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 02:02:33.979116 2195564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 02:02:33.979230 2195564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 02:02:33.986550 2195564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 02:02:33.986949 2195564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 02:02:33.987007 2195564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 02:02:34.124906 2195564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 02:02:34.125026 2195564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 02:02:35.129693 2195564 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001820193s
	I1222 02:02:35.130719 2195564 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 02:02:35.131074 2195564 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1222 02:02:35.131723 2195564 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 02:02:35.132423 2195564 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 02:02:37.354044 2195564 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.221497382s
	I1222 02:02:39.808630 2195564 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.675736202s
	I1222 02:02:41.632828 2195564 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501300416s
	I1222 02:02:41.671424 2195564 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 02:02:41.695221 2195564 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 02:02:41.712925 2195564 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 02:02:41.713136 2195564 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-678854 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 02:02:41.729102 2195564 kubeadm.go:319] [bootstrap-token] Using token: 3l8gj9.wepe6q59if3bl8hu
	I1222 02:02:41.732110 2195564 out.go:252]   - Configuring RBAC rules ...
	I1222 02:02:41.732236 2195564 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 02:02:41.736099 2195564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 02:02:41.747078 2195564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 02:02:41.750918 2195564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 02:02:41.755723 2195564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 02:02:41.762077 2195564 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 02:02:42.040161 2195564 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 02:02:42.466550 2195564 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 02:02:43.039890 2195564 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 02:02:43.041207 2195564 kubeadm.go:319] 
	I1222 02:02:43.041279 2195564 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 02:02:43.041284 2195564 kubeadm.go:319] 
	I1222 02:02:43.041385 2195564 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 02:02:43.041391 2195564 kubeadm.go:319] 
	I1222 02:02:43.041416 2195564 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 02:02:43.041475 2195564 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 02:02:43.041524 2195564 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 02:02:43.041529 2195564 kubeadm.go:319] 
	I1222 02:02:43.041582 2195564 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 02:02:43.041586 2195564 kubeadm.go:319] 
	I1222 02:02:43.041634 2195564 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 02:02:43.041637 2195564 kubeadm.go:319] 
	I1222 02:02:43.041690 2195564 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 02:02:43.041772 2195564 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 02:02:43.041840 2195564 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 02:02:43.041844 2195564 kubeadm.go:319] 
	I1222 02:02:43.041928 2195564 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 02:02:43.042004 2195564 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 02:02:43.042008 2195564 kubeadm.go:319] 
	I1222 02:02:43.042091 2195564 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3l8gj9.wepe6q59if3bl8hu \
	I1222 02:02:43.042201 2195564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d \
	I1222 02:02:43.042222 2195564 kubeadm.go:319] 	--control-plane 
	I1222 02:02:43.042226 2195564 kubeadm.go:319] 
	I1222 02:02:43.042310 2195564 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 02:02:43.042314 2195564 kubeadm.go:319] 
	I1222 02:02:43.042395 2195564 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3l8gj9.wepe6q59if3bl8hu \
	I1222 02:02:43.042497 2195564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26289418be2a32791e7c992883cca0ff842fc0f750d1af24ce9791a9c2754f2d 
	I1222 02:02:43.045683 2195564 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 02:02:43.045920 2195564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 02:02:43.046030 2195564 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
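
The --discovery-token-ca-cert-hash value printed in the join command above is "sha256:" plus a SHA-256 digest of the cluster CA's DER-encoded SubjectPublicKeyInfo; joining nodes use it to pin the CA they discover over the bootstrap token. A short Go sketch that recomputes it (the cert path is the certificateDir from the log; the rest is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: SHA-256
// over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
func caCertHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	h, err := caCertHash(pemBytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h)
}
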
	I1222 02:02:43.046048 2195564 cni.go:84] Creating CNI manager for "bridge"
	I1222 02:02:43.051228 2195564 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1222 02:02:43.054243 2195564 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1222 02:02:43.063585 2195564 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
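
The 496-byte conflist copied above is not reproduced in the log. For orientation only, a plausible minimal bridge conflist of the kind minikube writes to /etc/cni/net.d looks roughly like the literal below; the name, subnet, and plugin fields are assumptions, not the actual file contents:

package main

import "os"

// An illustrative bridge CNI conflist: a bridge plugin with host-local IPAM
// plus a portmap plugin for hostPort support. Values are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// 0644 matches typical CNI config permissions.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
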
	I1222 02:02:43.078287 2195564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 02:02:43.078403 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:43.078477 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-678854 minikube.k8s.io/updated_at=2025_12_22T02_02_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=enable-default-cni-678854 minikube.k8s.io/primary=true
	I1222 02:02:43.118438 2195564 ops.go:34] apiserver oom_adj: -16
	I1222 02:02:43.204579 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:43.705205 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:44.205299 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:44.705163 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:45.205696 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:45.705125 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:46.205123 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:46.705065 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:47.204742 2195564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 02:02:47.293252 2195564 kubeadm.go:1114] duration metric: took 4.214895926s to wait for elevateKubeSystemPrivileges
	I1222 02:02:47.293277 2195564 kubeadm.go:403] duration metric: took 19.512564234s to StartCluster
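
The burst of "kubectl get sa default" runs above is a polling loop: the minikube-rbac clusterrolebinding can only take effect once the default service account exists, so the harness retries roughly every 500ms until it appears. A sketch of the same wait loop (command and paths from the log; the timeout value is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" every 500ms until it
// succeeds or the deadline passes, mirroring the loop in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.3/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
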
	I1222 02:02:47.293293 2195564 settings.go:142] acquiring lock: {Name:mk3ac24f7ecbd502c4725d5b26f112d20f98d87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:47.293384 2195564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 02:02:47.294339 2195564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/kubeconfig: {Name:mk474396bf0931217dddd21c1a5be03103577d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 02:02:47.294592 2195564 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1222 02:02:47.294617 2195564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 02:02:47.294929 2195564 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 02:02:47.295041 2195564 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-678854"
	I1222 02:02:47.295069 2195564 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-678854"
	I1222 02:02:47.295102 2195564 config.go:182] Loaded profile config "enable-default-cni-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 02:02:47.295122 2195564 host.go:66] Checking if "enable-default-cni-678854" exists ...
	I1222 02:02:47.295160 2195564 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-678854"
	I1222 02:02:47.295173 2195564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-678854"
	I1222 02:02:47.295532 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Status}}
	I1222 02:02:47.295644 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Status}}
	I1222 02:02:47.298633 2195564 out.go:179] * Verifying Kubernetes components...
	I1222 02:02:47.301529 2195564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 02:02:47.335364 2195564 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-678854"
	I1222 02:02:47.335404 2195564 host.go:66] Checking if "enable-default-cni-678854" exists ...
	I1222 02:02:47.341795 2195564 cli_runner.go:164] Run: docker container inspect enable-default-cni-678854 --format={{.State.Status}}
	I1222 02:02:47.345963 2195564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 02:02:47.349506 2195564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 02:02:47.349534 2195564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 02:02:47.349611 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:47.368223 2195564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 02:02:47.368246 2195564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 02:02:47.368333 2195564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-678854
	I1222 02:02:47.393138 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:47.403597 2195564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38811 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/enable-default-cni-678854/id_rsa Username:docker}
	I1222 02:02:47.515096 2195564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 02:02:47.630067 2195564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 02:02:47.659483 2195564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 02:02:47.737623 2195564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 02:02:48.154936 2195564 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
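
The sed pipeline a few lines up splices a "hosts" block into the CoreDNS Corefile just ahead of its "forward . /etc/resolv.conf" plugin, so pods resolve host.minikube.internal to the gateway IP (it also enables query logging, which is omitted here). A Go sketch of just that insertion, with an abbreviated sample Corefile:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block before the forward plugin line of
// a CoreDNS Corefile, matching the effect of the sed command in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock) // hosts must come before forward
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}
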
	I1222 02:02:48.156958 2195564 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-678854" to be "Ready" ...
	I1222 02:02:48.210531 2195564 node_ready.go:49] node "enable-default-cni-678854" is "Ready"
	I1222 02:02:48.210659 2195564 node_ready.go:38] duration metric: took 53.571142ms for node "enable-default-cni-678854" to be "Ready" ...
	I1222 02:02:48.210682 2195564 api_server.go:52] waiting for apiserver process to appear ...
	I1222 02:02:48.210795 2195564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 02:02:48.667247 2195564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-678854" context rescaled to 1 replicas
	I1222 02:02:48.673864 2195564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014298979s)
	I1222 02:02:48.674046 2195564 api_server.go:72] duration metric: took 1.379427622s to wait for apiserver process to appear ...
	I1222 02:02:48.674233 2195564 api_server.go:88] waiting for apiserver healthz status ...
	I1222 02:02:48.674270 2195564 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 02:02:48.688299 2195564 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 02:02:48.696288 2195564 api_server.go:141] control plane version: v1.34.3
	I1222 02:02:48.696327 2195564 api_server.go:131] duration metric: took 22.062296ms to wait for apiserver health ...
	I1222 02:02:48.696336 2195564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 02:02:48.703345 2195564 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1222 02:02:48.706789 2195564 addons.go:530] duration metric: took 1.411863202s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1222 02:02:48.707026 2195564 system_pods.go:59] 8 kube-system pods found
	I1222 02:02:48.707056 2195564 system_pods.go:61] "coredns-66bc5c9577-br8fm" [c505c980-6f3c-4ad9-8f43-52c971614109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:48.707088 2195564 system_pods.go:61] "coredns-66bc5c9577-vhvb8" [c94c6faa-9d78-4aa9-8fbc-b28681b76a84] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:48.707101 2195564 system_pods.go:61] "etcd-enable-default-cni-678854" [d63db8bb-2ae5-43ce-a918-5fe1ec8ea02c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 02:02:48.707108 2195564 system_pods.go:61] "kube-apiserver-enable-default-cni-678854" [f1d5e8e0-d79c-4ccc-8f3b-01a49e0bd00d] Running
	I1222 02:02:48.707119 2195564 system_pods.go:61] "kube-controller-manager-enable-default-cni-678854" [8b512474-e147-4e74-8ba5-9274258e1d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 02:02:48.707124 2195564 system_pods.go:61] "kube-proxy-pn8wq" [6a8e4c51-10ce-4966-869d-1c8a0b6242f9] Running
	I1222 02:02:48.707131 2195564 system_pods.go:61] "kube-scheduler-enable-default-cni-678854" [017d7be1-3d3f-4f42-9505-2b26bda70a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 02:02:48.707161 2195564 system_pods.go:61] "storage-provisioner" [86114c84-202c-4a0f-9236-185585be3ac2] Pending
	I1222 02:02:48.707174 2195564 system_pods.go:74] duration metric: took 10.831355ms to wait for pod list to return data ...
	I1222 02:02:48.707181 2195564 default_sa.go:34] waiting for default service account to be created ...
	I1222 02:02:48.715537 2195564 default_sa.go:45] found service account: "default"
	I1222 02:02:48.715575 2195564 default_sa.go:55] duration metric: took 8.383387ms for default service account to be created ...
	I1222 02:02:48.715586 2195564 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 02:02:48.723004 2195564 system_pods.go:86] 8 kube-system pods found
	I1222 02:02:48.723044 2195564 system_pods.go:89] "coredns-66bc5c9577-br8fm" [c505c980-6f3c-4ad9-8f43-52c971614109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:48.723054 2195564 system_pods.go:89] "coredns-66bc5c9577-vhvb8" [c94c6faa-9d78-4aa9-8fbc-b28681b76a84] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:48.723088 2195564 system_pods.go:89] "etcd-enable-default-cni-678854" [d63db8bb-2ae5-43ce-a918-5fe1ec8ea02c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 02:02:48.723102 2195564 system_pods.go:89] "kube-apiserver-enable-default-cni-678854" [f1d5e8e0-d79c-4ccc-8f3b-01a49e0bd00d] Running
	I1222 02:02:48.723115 2195564 system_pods.go:89] "kube-controller-manager-enable-default-cni-678854" [8b512474-e147-4e74-8ba5-9274258e1d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 02:02:48.723129 2195564 system_pods.go:89] "kube-proxy-pn8wq" [6a8e4c51-10ce-4966-869d-1c8a0b6242f9] Running
	I1222 02:02:48.723136 2195564 system_pods.go:89] "kube-scheduler-enable-default-cni-678854" [017d7be1-3d3f-4f42-9505-2b26bda70a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 02:02:48.723144 2195564 system_pods.go:89] "storage-provisioner" [86114c84-202c-4a0f-9236-185585be3ac2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 02:02:48.723198 2195564 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1222 02:02:48.996228 2195564 system_pods.go:86] 8 kube-system pods found
	I1222 02:02:48.996268 2195564 system_pods.go:89] "coredns-66bc5c9577-br8fm" [c505c980-6f3c-4ad9-8f43-52c971614109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:48.996277 2195564 system_pods.go:89] "coredns-66bc5c9577-vhvb8" [c94c6faa-9d78-4aa9-8fbc-b28681b76a84] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:48.996285 2195564 system_pods.go:89] "etcd-enable-default-cni-678854" [d63db8bb-2ae5-43ce-a918-5fe1ec8ea02c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 02:02:48.996291 2195564 system_pods.go:89] "kube-apiserver-enable-default-cni-678854" [f1d5e8e0-d79c-4ccc-8f3b-01a49e0bd00d] Running
	I1222 02:02:48.996300 2195564 system_pods.go:89] "kube-controller-manager-enable-default-cni-678854" [8b512474-e147-4e74-8ba5-9274258e1d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 02:02:48.996304 2195564 system_pods.go:89] "kube-proxy-pn8wq" [6a8e4c51-10ce-4966-869d-1c8a0b6242f9] Running
	I1222 02:02:48.996317 2195564 system_pods.go:89] "kube-scheduler-enable-default-cni-678854" [017d7be1-3d3f-4f42-9505-2b26bda70a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 02:02:48.996327 2195564 system_pods.go:89] "storage-provisioner" [86114c84-202c-4a0f-9236-185585be3ac2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 02:02:49.358645 2195564 system_pods.go:86] 8 kube-system pods found
	I1222 02:02:49.358682 2195564 system_pods.go:89] "coredns-66bc5c9577-br8fm" [c505c980-6f3c-4ad9-8f43-52c971614109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:49.358690 2195564 system_pods.go:89] "coredns-66bc5c9577-vhvb8" [c94c6faa-9d78-4aa9-8fbc-b28681b76a84] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:49.358717 2195564 system_pods.go:89] "etcd-enable-default-cni-678854" [d63db8bb-2ae5-43ce-a918-5fe1ec8ea02c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 02:02:49.358727 2195564 system_pods.go:89] "kube-apiserver-enable-default-cni-678854" [f1d5e8e0-d79c-4ccc-8f3b-01a49e0bd00d] Running
	I1222 02:02:49.358735 2195564 system_pods.go:89] "kube-controller-manager-enable-default-cni-678854" [8b512474-e147-4e74-8ba5-9274258e1d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 02:02:49.358744 2195564 system_pods.go:89] "kube-proxy-pn8wq" [6a8e4c51-10ce-4966-869d-1c8a0b6242f9] Running
	I1222 02:02:49.358751 2195564 system_pods.go:89] "kube-scheduler-enable-default-cni-678854" [017d7be1-3d3f-4f42-9505-2b26bda70a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 02:02:49.358756 2195564 system_pods.go:89] "storage-provisioner" [86114c84-202c-4a0f-9236-185585be3ac2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 02:02:49.673682 2195564 system_pods.go:86] 8 kube-system pods found
	I1222 02:02:49.673722 2195564 system_pods.go:89] "coredns-66bc5c9577-br8fm" [c505c980-6f3c-4ad9-8f43-52c971614109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:49.673732 2195564 system_pods.go:89] "coredns-66bc5c9577-vhvb8" [c94c6faa-9d78-4aa9-8fbc-b28681b76a84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 02:02:49.673739 2195564 system_pods.go:89] "etcd-enable-default-cni-678854" [d63db8bb-2ae5-43ce-a918-5fe1ec8ea02c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 02:02:49.673744 2195564 system_pods.go:89] "kube-apiserver-enable-default-cni-678854" [f1d5e8e0-d79c-4ccc-8f3b-01a49e0bd00d] Running
	I1222 02:02:49.673753 2195564 system_pods.go:89] "kube-controller-manager-enable-default-cni-678854" [8b512474-e147-4e74-8ba5-9274258e1d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 02:02:49.673761 2195564 system_pods.go:89] "kube-proxy-pn8wq" [6a8e4c51-10ce-4966-869d-1c8a0b6242f9] Running
	I1222 02:02:49.673778 2195564 system_pods.go:89] "kube-scheduler-enable-default-cni-678854" [017d7be1-3d3f-4f42-9505-2b26bda70a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 02:02:49.673788 2195564 system_pods.go:89] "storage-provisioner" [86114c84-202c-4a0f-9236-185585be3ac2] Running
	I1222 02:02:49.673805 2195564 system_pods.go:126] duration metric: took 958.188082ms to wait for k8s-apps to be running ...
	I1222 02:02:49.673816 2195564 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 02:02:49.673877 2195564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 02:02:49.687312 2195564 system_svc.go:56] duration metric: took 13.486717ms WaitForService to wait for kubelet
	I1222 02:02:49.687340 2195564 kubeadm.go:587] duration metric: took 2.392723209s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 02:02:49.687370 2195564 node_conditions.go:102] verifying NodePressure condition ...
	I1222 02:02:49.691084 2195564 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 02:02:49.691114 2195564 node_conditions.go:123] node cpu capacity is 2
	I1222 02:02:49.691128 2195564 node_conditions.go:105] duration metric: took 3.75279ms to run NodePressure ...
	I1222 02:02:49.691159 2195564 start.go:242] waiting for startup goroutines ...
	I1222 02:02:49.691177 2195564 start.go:247] waiting for cluster config update ...
	I1222 02:02:49.691189 2195564 start.go:256] writing updated cluster config ...
	I1222 02:02:49.691468 2195564 ssh_runner.go:195] Run: rm -f paused
	I1222 02:02:49.695336 2195564 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 02:02:49.698847 2195564 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-br8fm" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 02:02:51.705478 2195564 pod_ready.go:104] pod "coredns-66bc5c9577-br8fm" is not "Ready", error: <nil>
	W1222 02:02:54.206147 2195564 pod_ready.go:104] pod "coredns-66bc5c9577-br8fm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361841003Z" level=info msg="Using the internal default seccomp profile"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361848683Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.36185418Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361860416Z" level=info msg="RDT not available in the host system"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.361873446Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362571854Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362593843Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.362610154Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363331199Z" level=info msg="Conmon does support the --sync option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363355166Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.363489102Z" level=info msg="Updated default CNI network name to "
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364161575Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364588492Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.364649191Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426273648Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426307715Z" level=info msg="Starting seccomp notifier watcher"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426348477Z" level=info msg="Create NRI interface"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426482331Z" level=info msg="built-in NRI default validator is disabled"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426498232Z" level=info msg="runtime interface created"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426511188Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426517128Z" level=info msg="runtime interface starting up..."
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426522773Z" level=info msg="starting plugins..."
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426536213Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:43:36 no-preload-491054 crio[615]: time="2025-12-22T01:43:36.426597914Z" level=info msg="No systemd watchdog enabled"
	Dec 22 01:43:36 no-preload-491054 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 02:02:55.262670   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 02:02:55.263330   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 02:02:55.264878   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 02:02:55.265203   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 02:02:55.266719   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.235956] overlayfs: idmapped layers are currently not supported
	[Dec22 01:11] overlayfs: idmapped layers are currently not supported
	[Dec22 01:12] overlayfs: idmapped layers are currently not supported
	[Dec22 01:14] overlayfs: idmapped layers are currently not supported
	[Dec22 01:15] overlayfs: idmapped layers are currently not supported
	[Dec22 01:26] overlayfs: idmapped layers are currently not supported
	[Dec22 01:27] overlayfs: idmapped layers are currently not supported
	[Dec22 01:28] overlayfs: idmapped layers are currently not supported
	[Dec22 01:29] overlayfs: idmapped layers are currently not supported
	[ +22.995861] overlayfs: idmapped layers are currently not supported
	[Dec22 01:30] overlayfs: idmapped layers are currently not supported
	[ +35.878071] overlayfs: idmapped layers are currently not supported
	[Dec22 01:31] overlayfs: idmapped layers are currently not supported
	[Dec22 01:32] overlayfs: idmapped layers are currently not supported
	[Dec22 01:33] overlayfs: idmapped layers are currently not supported
	[Dec22 01:34] overlayfs: idmapped layers are currently not supported
	[Dec22 01:35] overlayfs: idmapped layers are currently not supported
	[Dec22 01:36] overlayfs: idmapped layers are currently not supported
	[Dec22 01:37] overlayfs: idmapped layers are currently not supported
	[Dec22 01:55] overlayfs: idmapped layers are currently not supported
	[Dec22 01:57] overlayfs: idmapped layers are currently not supported
	[Dec22 01:59] overlayfs: idmapped layers are currently not supported
	[ +24.432266] kauditd_printk_skb: 8 callbacks suppressed
	[Dec22 02:01] overlayfs: idmapped layers are currently not supported
	[Dec22 02:02] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:02:55 up 1 day,  8:45,  0 user,  load average: 2.03, 1.84, 1.56
	Linux no-preload-491054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 02:02:52 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 02:02:53 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1540.
	Dec 22 02:02:53 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 02:02:53 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 02:02:53 no-preload-491054 kubelet[10093]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 02:02:53 no-preload-491054 kubelet[10093]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 02:02:53 no-preload-491054 kubelet[10093]: E1222 02:02:53.647160   10093 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 02:02:53 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 02:02:53 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 02:02:54 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1541.
	Dec 22 02:02:54 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 02:02:54 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 02:02:54 no-preload-491054 kubelet[10114]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 02:02:54 no-preload-491054 kubelet[10114]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 02:02:54 no-preload-491054 kubelet[10114]: E1222 02:02:54.419690   10114 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 02:02:54 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 02:02:54 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 02:02:55 no-preload-491054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1542.
	Dec 22 02:02:55 no-preload-491054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 02:02:55 no-preload-491054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 02:02:55 no-preload-491054 kubelet[10203]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 02:02:55 no-preload-491054 kubelet[10203]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 22 02:02:55 no-preload-491054 kubelet[10203]: E1222 02:02:55.160269   10203 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 02:02:55 no-preload-491054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 02:02:55 no-preload-491054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
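
Note: the kubelet excerpt above is the root cause of this failure: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and systemd's restart policy keeps relaunching it (restart counter 1540-1542). A quick way to confirm which cgroup hierarchy a host is on is to statfs /sys/fs/cgroup and compare against CGROUP2_SUPER_MAGIC; a minimal Go sketch (not part of the test suite):

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    // cgroup2Magic is CGROUP2_SUPER_MAGIC from <linux/magic.h>.
    const cgroup2Magic = 0x63677270

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		panic(err)
    	}
    	if st.Type == cgroup2Magic {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		// Matches the "cgroup v1 support is unsupported" validation
    		// failure this kubelet logs above.
    		fmt.Println("cgroup v1 (legacy hierarchy)")
    	}
    }

On this runner (Ubuntu 20.04 with CgroupDriver:cgroupfs) the check reports cgroup v1, consistent with the validation failure.
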
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491054 -n no-preload-491054: exit status 2 (327.802839ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-491054" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (253.04s)
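
The status probe above renders a single field of minikube's status through a Go text/template (--format={{.APIServer}}), which is why the raw stdout is just "Stopped". A simplified sketch of that rendering; the Status struct here is illustrative, not minikube's exact type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status stands in for the struct minikube exposes to --format templates.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
    }
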
E1222 02:04:40.214566 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.219949 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.230315 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.250669 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.291742 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.371989 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.532265 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:40.853098 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:41.493928 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:42.774175 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:45.334490 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:04:50.454757 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
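
The gaps between these cert-load retries roughly double each time (about 5 ms, 10 ms, 20 ms, ... up to ~5 s), the signature of exponential backoff in client-go's certificate rotation. An illustrative sketch of that retry shape using k8s.io/apimachinery's wait package; this is not minikube's or client-go's actual code, and the parameters are simply read off the timestamps above:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	backoff := wait.Backoff{
    		Duration: 5 * time.Millisecond, // first retry gap seen in the log
    		Factor:   2.0,                  // each gap doubles
    		Steps:    11,                   // eleven retries are logged above
    	}
    	attempt := 0
    	_ = wait.ExponentialBackoff(backoff, func() (done bool, err error) {
    		attempt++
    		fmt.Printf("attempt %d: loading client cert...\n", attempt)
    		return false, nil // keep failing, like the missing client.crt
    	})
    }
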


Test pass (322/416)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.23
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 4.46
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.28
18 TestDownloadOnly/v1.34.3/DeleteAll 0.4
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.24
21 TestDownloadOnly/v1.35.0-rc.1/json-events 5.44
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.59
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 114.79
40 TestAddons/serial/GCPAuth/Namespaces 0.23
41 TestAddons/serial/GCPAuth/FakeCredentials 9.86
57 TestAddons/StoppedEnableDisable 12.39
58 TestCertOptions 37.5
59 TestCertExpiration 240.88
61 TestForceSystemdFlag 33.73
62 TestForceSystemdEnv 35.19
67 TestErrorSpam/setup 31.59
68 TestErrorSpam/start 0.81
69 TestErrorSpam/status 1.07
70 TestErrorSpam/pause 6.14
71 TestErrorSpam/unpause 6.05
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 51.07
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.44
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.86
84 TestFunctional/serial/CacheCmd/cache/add_local 1.34
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.18
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 28.39
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.46
95 TestFunctional/serial/LogsFileCmd 1.49
96 TestFunctional/serial/InvalidService 4.22
98 TestFunctional/parallel/ConfigCmd 0.52
99 TestFunctional/parallel/DashboardCmd 13.71
100 TestFunctional/parallel/DryRun 0.56
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.35
106 TestFunctional/parallel/ServiceCmdConnect 7.81
107 TestFunctional/parallel/AddonsCmd 0.13
108 TestFunctional/parallel/PersistentVolumeClaim 18.73
110 TestFunctional/parallel/SSHCmd 0.73
111 TestFunctional/parallel/CpCmd 2.37
113 TestFunctional/parallel/FileSync 0.27
114 TestFunctional/parallel/CertSync 1.86
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
122 TestFunctional/parallel/License 0.32
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.4
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
138 TestFunctional/parallel/MountCmd/any-port 7.34
139 TestFunctional/parallel/ServiceCmd/List 0.54
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
142 TestFunctional/parallel/ServiceCmd/Format 0.44
143 TestFunctional/parallel/ServiceCmd/URL 0.51
144 TestFunctional/parallel/MountCmd/specific-port 2.28
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
146 TestFunctional/parallel/Version/short 0.1
147 TestFunctional/parallel/Version/components 0.9
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.1
153 TestFunctional/parallel/ImageCommands/Setup 0.62
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.68
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.44
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.71
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.91
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.96
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.45
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.7
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.22
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.04
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.33
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 2.1
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.74
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.3
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.05
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.24
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.21
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.22
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.24
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.6
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.26
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.5
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.07
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.34
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.16
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.16
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.14
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.41
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.7
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.9
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.53
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.4
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.37
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.4
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.08
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.22
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.03
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 169.41
265 TestMultiControlPlane/serial/DeployApp 6.73
266 TestMultiControlPlane/serial/PingHostFromPods 1.54
267 TestMultiControlPlane/serial/AddWorkerNode 31.96
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
270 TestMultiControlPlane/serial/CopyFile 19.31
271 TestMultiControlPlane/serial/StopSecondaryNode 12.84
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
273 TestMultiControlPlane/serial/RestartSecondaryNode 19.56
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132.03
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.21
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
278 TestMultiControlPlane/serial/StopCluster 35.97
279 TestMultiControlPlane/serial/RestartCluster 82.04
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
281 TestMultiControlPlane/serial/AddSecondaryNode 86.33
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
287 TestJSONOutput/start/Command 50.13
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.86
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.23
312 TestKicCustomNetwork/create_custom_network 40.33
313 TestKicCustomNetwork/use_default_bridge_network 34.86
314 TestKicExistingNetwork 32.62
315 TestKicCustomSubnet 36.87
316 TestKicStaticIP 33.76
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 72.44
321 TestMountStart/serial/StartWithMountFirst 9.11
322 TestMountStart/serial/VerifyMountFirst 0.26
323 TestMountStart/serial/StartWithMountSecond 8.71
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.7
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.03
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 78.19
333 TestMultiNode/serial/DeployApp2Nodes 4.47
334 TestMultiNode/serial/PingHostFrom2Pods 0.91
335 TestMultiNode/serial/AddNode 29.24
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.69
338 TestMultiNode/serial/CopyFile 10.5
339 TestMultiNode/serial/StopNode 2.36
340 TestMultiNode/serial/StartAfterStop 8.11
341 TestMultiNode/serial/RestartKeepsNodes 72.59
342 TestMultiNode/serial/DeleteNode 5.61
343 TestMultiNode/serial/StopMultiNode 24
344 TestMultiNode/serial/RestartMultiNode 52.5
345 TestMultiNode/serial/ValidateNameConflict 35.93
352 TestScheduledStopUnix 110.63
355 TestInsufficientStorage 13.06
356 TestRunningBinaryUpgrade 301.88
359 TestMissingContainerUpgrade 111.48
361 TestPause/serial/Start 65.89
362 TestPause/serial/SecondStartNoReconfiguration 31.31
364 TestStoppedBinaryUpgrade/Setup 0.85
365 TestStoppedBinaryUpgrade/Upgrade 314.79
366 TestStoppedBinaryUpgrade/MinikubeLogs 1.84
374 TestPreload/Start-NoPreload-PullImage 68.26
375 TestPreload/Restart-With-Preload-Check-User-Image 56.51
378 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
379 TestNoKubernetes/serial/StartWithK8s 36.71
387 TestNetworkPlugins/group/false 4.8
391 TestNoKubernetes/serial/StartWithStopK8s 19.01
392 TestNoKubernetes/serial/Start 8.99
393 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
394 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
395 TestNoKubernetes/serial/ProfileList 1.08
396 TestNoKubernetes/serial/Stop 1.34
397 TestNoKubernetes/serial/StartNoArgs 7.5
398 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
400 TestStartStop/group/old-k8s-version/serial/FirstStart 60.96
401 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
403 TestStartStop/group/old-k8s-version/serial/Stop 12.03
404 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
405 TestStartStop/group/old-k8s-version/serial/SecondStart 54.79
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
413 TestStartStop/group/embed-certs/serial/FirstStart 52.27
414 TestStartStop/group/embed-certs/serial/DeployApp 9.32
416 TestStartStop/group/embed-certs/serial/Stop 12.01
417 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
418 TestStartStop/group/embed-certs/serial/SecondStart 50.56
419 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
420 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
421 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
424 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.37
425 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
427 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
429 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.66
430 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
431 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
432 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
438 TestStartStop/group/no-preload/serial/Stop 1.31
439 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
441 TestStartStop/group/newest-cni/serial/DeployApp 0
443 TestStartStop/group/newest-cni/serial/Stop 1.32
444 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
447 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
448 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
449 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
451 TestPreload/PreloadSrc/gcs 6.08
452 TestPreload/PreloadSrc/github 5.74
453 TestPreload/PreloadSrc/gcs-cached 0.44
454 TestNetworkPlugins/group/auto/Start 53.51
455 TestNetworkPlugins/group/auto/KubeletFlags 0.31
456 TestNetworkPlugins/group/auto/NetCatPod 9.31
457 TestNetworkPlugins/group/auto/DNS 0.17
458 TestNetworkPlugins/group/auto/Localhost 0.13
459 TestNetworkPlugins/group/auto/HairPin 0.13
460 TestNetworkPlugins/group/kindnet/Start 84.75
461 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
462 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
463 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
464 TestNetworkPlugins/group/kindnet/DNS 0.15
465 TestNetworkPlugins/group/kindnet/Localhost 0.13
466 TestNetworkPlugins/group/kindnet/HairPin 0.14
468 TestNetworkPlugins/group/calico/Start 77.76
469 TestNetworkPlugins/group/calico/ControllerPod 6.01
470 TestNetworkPlugins/group/calico/KubeletFlags 0.31
471 TestNetworkPlugins/group/calico/NetCatPod 9.29
472 TestNetworkPlugins/group/calico/DNS 0.19
473 TestNetworkPlugins/group/calico/Localhost 0.14
474 TestNetworkPlugins/group/calico/HairPin 0.13
475 TestNetworkPlugins/group/custom-flannel/Start 61.22
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
478 TestNetworkPlugins/group/custom-flannel/DNS 0.16
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
481 TestNetworkPlugins/group/enable-default-cni/Start 72.87
482 TestNetworkPlugins/group/flannel/Start 53.01
483 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
484 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.38
485 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
486 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
487 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
488 TestNetworkPlugins/group/flannel/ControllerPod 6.01
489 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
490 TestNetworkPlugins/group/flannel/NetCatPod 12.35
491 TestNetworkPlugins/group/bridge/Start 52.95
492 TestNetworkPlugins/group/flannel/DNS 0.16
493 TestNetworkPlugins/group/flannel/Localhost 0.15
494 TestNetworkPlugins/group/flannel/HairPin 0.17
495 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
496 TestNetworkPlugins/group/bridge/NetCatPod 9.25
497 TestNetworkPlugins/group/bridge/DNS 0.16
498 TestNetworkPlugins/group/bridge/Localhost 0.13
499 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (6.23s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-000641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-000641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.231669228s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.23s)
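
json-events exercises minikube's machine-readable output: with -o=json, minikube start emits one JSON event per line (CloudEvents-style records) instead of human-oriented text. A hedged consumer sketch; the field names are assumptions about that schema, not a documented contract:

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // event keeps only the fields this sketch inspects.
    type event struct {
    	Type string          `json:"type"`
    	Data json.RawMessage `json:"data"`
    }

    func main() {
    	// Pipe `minikube start -o=json ...` into stdin.
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long event lines
    	for sc.Scan() {
    		var e event
    		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
    			continue // tolerate any non-JSON noise
    		}
    		fmt.Printf("%s: %s\n", e.Type, e.Data)
    	}
    }
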

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1222 00:09:16.682132 1881219 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1222 00:09:16.682202 1881219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
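
preload-exists only has to confirm that the earlier download left the tarball in the local cache (the "Found local preload" line above). The check amounts to a stat on a well-known path; a minimal sketch with the layout taken from that log line (the helper name and version tag here are illustrative, not minikube's API):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // preloadPath rebuilds the cache path seen in the log:
    // $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<ver>-cri-o-overlay-arm64.tar.lz4
    // (MINIKUBE_HOME defaults to ~/.minikube when unset; the CI run sets it explicitly.)
    func preloadPath(minikubeHome, k8sVersion string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
    	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
    	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
    	if _, err := os.Stat(p); err != nil {
    		fmt.Println("preload missing:", err)
    		return
    	}
    	fmt.Println("preload exists:", p)
    }
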

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-000641
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-000641: exit status 85 (93.102331ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-000641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-000641 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:09:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:09:10.494263 1881224 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:09:10.494372 1881224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:10.494413 1881224 out.go:374] Setting ErrFile to fd 2...
	I1222 00:09:10.494425 1881224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:10.494672 1881224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	W1222 00:09:10.494802 1881224 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22179-1874095/.minikube/config/config.json: open /home/jenkins/minikube-integration/22179-1874095/.minikube/config/config.json: no such file or directory
	I1222 00:09:10.495190 1881224 out.go:368] Setting JSON to true
	I1222 00:09:10.495994 1881224 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111100,"bootTime":1766251051,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:09:10.496061 1881224 start.go:143] virtualization:  
	I1222 00:09:10.502111 1881224 out.go:99] [download-only-000641] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1222 00:09:10.502325 1881224 preload.go:369] Failed to list preload files: open /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball: no such file or directory
	I1222 00:09:10.502399 1881224 notify.go:221] Checking for updates...
	I1222 00:09:10.505826 1881224 out.go:171] MINIKUBE_LOCATION=22179
	I1222 00:09:10.509133 1881224 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:09:10.512298 1881224 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:09:10.515561 1881224 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:09:10.518809 1881224 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1222 00:09:10.524796 1881224 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 00:09:10.525060 1881224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:09:10.547193 1881224 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:09:10.547299 1881224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:10.609701 1881224 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-22 00:09:10.600958272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:10.609800 1881224 docker.go:319] overlay module found
	I1222 00:09:10.612840 1881224 out.go:99] Using the docker driver based on user configuration
	I1222 00:09:10.612886 1881224 start.go:309] selected driver: docker
	I1222 00:09:10.612898 1881224 start.go:928] validating driver "docker" against <nil>
	I1222 00:09:10.612992 1881224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:10.666794 1881224 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-22 00:09:10.658180153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:10.666935 1881224 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:09:10.667195 1881224 start_flags.go:413] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1222 00:09:10.667343 1881224 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 00:09:10.670478 1881224 out.go:171] Using Docker driver with root privileges
	I1222 00:09:10.673513 1881224 cni.go:84] Creating CNI manager for ""
	I1222 00:09:10.673580 1881224 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:09:10.673592 1881224 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:09:10.673667 1881224 start.go:353] cluster config:
	{Name:download-only-000641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-000641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:09:10.676562 1881224 out.go:99] Starting "download-only-000641" primary control-plane node in "download-only-000641" cluster
	I1222 00:09:10.676579 1881224 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:09:10.679404 1881224 out.go:99] Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:09:10.679436 1881224 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1222 00:09:10.679599 1881224 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:09:10.698105 1881224 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:09:10.698125 1881224 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1222 00:09:10.698262 1881224 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1222 00:09:10.698370 1881224 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1222 00:09:10.737679 1881224 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:10.737713 1881224 cache.go:65] Caching tarball of preloaded images
	I1222 00:09:10.737884 1881224 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1222 00:09:10.741193 1881224 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1222 00:09:10.741221 1881224 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:10.741228 1881224 preload.go:333] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1222 00:09:10.825569 1881224 preload.go:310] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1222 00:09:10.825705 1881224 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:14.882256 1881224 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1222 00:09:14.882619 1881224 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/download-only-000641/config.json ...
	I1222 00:09:14.882650 1881224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/download-only-000641/config.json: {Name:mke5fbc2abf547096b859b87d1eb815ced5a5ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:14.882834 1881224 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1222 00:09:14.883017 1881224 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-000641 host does not exist
	  To start a cluster, run: "minikube start -p download-only-000641"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
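
The "Last Start" log above shows the preload download pattern: fetch the expected checksum from the GCS API ("Got checksum from GCS API ..."), then download with ?checksum=md5:<sum> so the result can be verified. Re-checking a downloaded tarball by hand follows the same shape; a standalone sketch (minikube's own download helper does this internally, this is just the equivalent arithmetic):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifyMD5 streams the file through an MD5 hash and compares it with the
    // expected sum, e.g. "e092595ade89dbfc477bd4cd6b9c633b" from the log above.
    func verifyMD5(path, want string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	if len(os.Args) != 3 {
    		fmt.Fprintln(os.Stderr, "usage: verify <file> <md5>")
    		os.Exit(2)
    	}
    	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
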

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-000641
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.3/json-events (4.46s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-805594 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-805594 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.454820388s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (4.46s)

TestDownloadOnly/v1.34.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1222 00:09:21.586955 1881219 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1222 00:09:21.586994 1881219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-805594
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-805594: exit status 85 (280.058156ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-000641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-000641 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-000641                                                                                                                                                   │ download-only-000641 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ -o=json --download-only -p download-only-805594 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-805594 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:09:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:09:17.173254 1881426 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:09:17.173408 1881426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:17.173425 1881426 out.go:374] Setting ErrFile to fd 2...
	I1222 00:09:17.173431 1881426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:17.173687 1881426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:09:17.174085 1881426 out.go:368] Setting JSON to true
	I1222 00:09:17.174897 1881426 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111107,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:09:17.174963 1881426 start.go:143] virtualization:  
	I1222 00:09:17.178360 1881426 out.go:99] [download-only-805594] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:09:17.178652 1881426 notify.go:221] Checking for updates...
	I1222 00:09:17.181824 1881426 out.go:171] MINIKUBE_LOCATION=22179
	I1222 00:09:17.184811 1881426 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:09:17.187668 1881426 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:09:17.190595 1881426 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:09:17.193515 1881426 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1222 00:09:17.199262 1881426 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 00:09:17.199540 1881426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:09:17.218849 1881426 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:09:17.218949 1881426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:17.284013 1881426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-22 00:09:17.275179373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:17.284121 1881426 docker.go:319] overlay module found
	I1222 00:09:17.287120 1881426 out.go:99] Using the docker driver based on user configuration
	I1222 00:09:17.287155 1881426 start.go:309] selected driver: docker
	I1222 00:09:17.287168 1881426 start.go:928] validating driver "docker" against <nil>
	I1222 00:09:17.287284 1881426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:17.341151 1881426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-22 00:09:17.332172066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:17.341301 1881426 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:09:17.341653 1881426 start_flags.go:413] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1222 00:09:17.341807 1881426 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 00:09:17.344891 1881426 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-805594 host does not exist
	  To start a cluster, run: "minikube start -p download-only-805594"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.28s)

TestDownloadOnly/v1.34.3/DeleteAll (0.4s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.40s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-805594
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.24s)

TestDownloadOnly/v1.35.0-rc.1/json-events (5.44s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-304005 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-304005 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.436315036s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (5.44s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1222 00:09:27.937984 1881219 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1222 00:09:27.938021 1881219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-304005
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-304005: exit status 85 (93.701457ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-000641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-000641 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-000641                                                                                                                                                        │ download-only-000641 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ -o=json --download-only -p download-only-805594 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-805594 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ delete  │ -p download-only-805594                                                                                                                                                        │ download-only-805594 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │ 22 Dec 25 00:09 UTC │
	│ start   │ -o=json --download-only -p download-only-304005 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-304005 │ jenkins │ v1.37.0 │ 22 Dec 25 00:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:09:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:09:22.547146 1881630 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:09:22.547262 1881630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:22.547304 1881630 out.go:374] Setting ErrFile to fd 2...
	I1222 00:09:22.547314 1881630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:09:22.547568 1881630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:09:22.547951 1881630 out.go:368] Setting JSON to true
	I1222 00:09:22.548753 1881630 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111112,"bootTime":1766251051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:09:22.548816 1881630 start.go:143] virtualization:  
	I1222 00:09:22.575349 1881630 out.go:99] [download-only-304005] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:09:22.575564 1881630 notify.go:221] Checking for updates...
	I1222 00:09:22.607910 1881630 out.go:171] MINIKUBE_LOCATION=22179
	I1222 00:09:22.640520 1881630 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:09:22.672225 1881630 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:09:22.704490 1881630 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:09:22.734898 1881630 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1222 00:09:22.799586 1881630 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 00:09:22.799880 1881630 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:09:22.820717 1881630 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:09:22.820833 1881630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:22.871991 1881630 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:09:22.863268699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:22.872099 1881630 docker.go:319] overlay module found
	I1222 00:09:22.875679 1881630 out.go:99] Using the docker driver based on user configuration
	I1222 00:09:22.875713 1881630 start.go:309] selected driver: docker
	I1222 00:09:22.875727 1881630 start.go:928] validating driver "docker" against <nil>
	I1222 00:09:22.875833 1881630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:09:22.933244 1881630 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:09:22.924457479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:09:22.933445 1881630 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:09:22.933700 1881630 start_flags.go:413] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1222 00:09:22.933844 1881630 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 00:09:22.937988 1881630 out.go:171] Using Docker driver with root privileges
	I1222 00:09:22.941738 1881630 cni.go:84] Creating CNI manager for ""
	I1222 00:09:22.941814 1881630 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1222 00:09:22.941827 1881630 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:09:22.941904 1881630 start.go:353] cluster config:
	{Name:download-only-304005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-304005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:09:22.945591 1881630 out.go:99] Starting "download-only-304005" primary control-plane node in "download-only-304005" cluster
	I1222 00:09:22.945608 1881630 cache.go:134] Beginning downloading kic base image for docker with crio
	I1222 00:09:22.949147 1881630 out.go:99] Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:09:22.949192 1881630 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:09:22.949390 1881630 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:09:22.967258 1881630 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:09:22.967281 1881630 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1222 00:09:22.967353 1881630 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1222 00:09:22.967376 1881630 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1222 00:09:22.967385 1881630 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1222 00:09:22.967392 1881630 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1222 00:09:23.000362 1881630 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:23.000390 1881630 cache.go:65] Caching tarball of preloaded images
	I1222 00:09:23.000584 1881630 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:09:23.005377 1881630 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1222 00:09:23.005418 1881630 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:23.005426 1881630 preload.go:333] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1222 00:09:23.088054 1881630 preload.go:310] Got checksum from GCS API "efae947990a69f0349b1b3fdbfa98de4"
	I1222 00:09:23.088113 1881630 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:efae947990a69f0349b1b3fdbfa98de4 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1222 00:09:26.848028 1881630 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1222 00:09:26.848401 1881630 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/download-only-304005/config.json ...
	I1222 00:09:26.848433 1881630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/download-only-304005/config.json: {Name:mkd2f18018c83efb6881654625534b912f1f05bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:09:26.848646 1881630 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1222 00:09:26.848810 1881630 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl
	
	
	* The control-plane node download-only-304005 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304005"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)
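The preload fetch recorded above is checksum-gated: the downloader first asks the GCS API for the tarball's MD5 ("efae947990a69f0349b1b3fdbfa98de4") and then appends it to the download URL as ?checksum=md5:..., so a corrupted download is rejected rather than cached. A rough manual equivalent of that check, sketched with stock curl and md5sum (the local filename preload.tar.lz4 is arbitrary and not part of the test run):

	curl -fsSL -o preload.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	echo "efae947990a69f0349b1b3fdbfa98de4  preload.tar.lz4" | md5sum -c -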

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-304005
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1222 00:09:29.221274 1881219 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-854679 --alsologtostderr --binary-mirror http://127.0.0.1:46683 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-854679" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-854679
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-568394
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-568394: exit status 85 (97.995824ms)
-- stdout --
	* Profile "addons-568394" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-568394"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-568394
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-568394: exit status 85 (100.386804ms)
-- stdout --
	* Profile "addons-568394" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-568394"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (114.79s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-568394 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-568394 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m54.786439869s)
--- PASS: TestAddons/Setup (114.79s)

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-568394 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-568394 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-568394 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-568394 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [112ce16b-8ebc-4364-9aa5-006a80b00eb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [112ce16b-8ebc-4364-9aa5-006a80b00eb5] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003348085s
addons_test.go:696: (dbg) Run:  kubectl --context addons-568394 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-568394 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-568394 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-568394 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-568394
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-568394: (12.10128329s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-568394
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-568394
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-568394
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestCertOptions (37.5s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1222 01:30:47.971201 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-433911 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.370544305s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-433911 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-433911 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-433911 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-433911" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-433911
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-433911: (2.406872791s)
--- PASS: TestCertOptions (37.50s)

TestCertExpiration (240.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-031738 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1222 01:29:06.888699 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-031738 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.998414206s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1222 01:32:43.835660 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-031738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.156344602s)
helpers_test.go:176: Cleaning up "cert-expiration-031738" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-031738
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-031738: (2.724458952s)
--- PASS: TestCertExpiration (240.88s)

TestForceSystemdFlag (33.73s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-314223 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-314223 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.877419726s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-314223 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-314223" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-314223
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-314223: (2.524915879s)
--- PASS: TestForceSystemdFlag (33.73s)

TestForceSystemdEnv (35.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-464544 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-464544 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.453040096s)
helpers_test.go:176: Cleaning up "force-systemd-env-464544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-464544
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-464544: (2.735566979s)
--- PASS: TestForceSystemdEnv (35.19s)

TestErrorSpam/setup (31.59s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-252338 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-252338 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-252338 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-252338 --driver=docker  --container-runtime=crio: (31.58710661s)
--- PASS: TestErrorSpam/setup (31.59s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (6.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause: exit status 80 (1.85449626s)
-- stdout --
	* Pausing node nospam-252338 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:15:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_8.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause: exit status 80 (2.213687419s)
-- stdout --
	* Pausing node nospam-252338 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:15:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_8.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause: exit status 80 (2.068664417s)
-- stdout --
	* Pausing node nospam-252338 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:15:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_8.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.14s)
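All three pause attempts above fail the same way: "sudo runc list -f json" exits 1 because /run/runc does not exist inside the node, so minikube aborts with GUEST_PAUSE before it reaches any container. A minimal manual probe along the same lines, assuming the nospam-252338 profile from this run is still up (a diagnostic sketch, not part of the recorded test output; the ssh form mirrors the cert-options test above):

	out/minikube-linux-arm64 ssh -p nospam-252338 -- "sudo ls -ld /run/runc"
	out/minikube-linux-arm64 ssh -p nospam-252338 -- "sudo runc list -f json"

If the first command reports "no such file or directory", the runc state directory is missing and both pause and unpause will keep exiting with status 80, which matches the TestErrorSpam/unpause attempts that follow.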

TestErrorSpam/unpause (6.05s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause: exit status 80 (2.208406116s)
-- stdout --
	* Unpausing node nospam-252338 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:15:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_8.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause: exit status 80 (1.720619101s)
-- stdout --
	* Unpausing node nospam-252338 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:15:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_8.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause: exit status 80 (2.12092534s)
-- stdout --
	* Unpausing node nospam-252338 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T00:15:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_8.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.05s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 stop: (1.318514747s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-252338 --log_dir /tmp/nospam-252338 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885873 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1222 00:16:25.768389 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:25.773781 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:25.784091 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:25.804384 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:25.844676 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:25.924995 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:26.085467 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:26.405680 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:27.046660 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:28.326950 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-885873 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.073674636s)
--- PASS: TestFunctional/serial/StartWithProxy (51.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.44s)

=== RUN   TestFunctional/serial/SoftStart
I1222 00:16:28.656014 1881219 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885873 --alsologtostderr -v=8
E1222 00:16:30.887177 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:36.007683 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:46.247928 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-885873 --alsologtostderr -v=8: (29.432646552s)
functional_test.go:678: soft start took 29.438219402s for "functional-885873" cluster.
I1222 00:16:58.088986 1881219 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (29.44s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-885873 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 cache add registry.k8s.io/pause:3.1: (1.183215885s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 cache add registry.k8s.io/pause:3.3: (1.339698348s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 cache add registry.k8s.io/pause:latest: (1.337217213s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-885873 /tmp/TestFunctionalserialCacheCmdcacheadd_local2012617343/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cache add minikube-local-cache-test:functional-885873
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cache delete minikube-local-cache-test:functional-885873
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-885873
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)
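
For reference, the local-image path exercised here can be replayed by hand. A minimal sketch, assuming a Docker client on the host and a throwaway build context in the current directory (the image tag and profile name mirror the test):

# build a local image, push it into minikube's cache, then clean up both sides
docker build -t minikube-local-cache-test:functional-885873 .
minikube -p functional-885873 cache add minikube-local-cache-test:functional-885873
minikube -p functional-885873 cache delete minikube-local-cache-test:functional-885873
docker rmi minikube-local-cache-test:functional-885873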

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.640346ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
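
The reload cycle above is reproducible outside the harness. A minimal sketch, assuming the profile functional-885873 with the CRI-O runtime; the second inspecti is expected to fail, matching the non-zero exit logged above:

# drop the image inside the node, confirm it is gone, then restore it from the cache
minikube -p functional-885873 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-885873 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed
minikube -p functional-885873 cache reload
minikube -p functional-885873 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload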

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 kubectl -- --context functional-885873 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-885873 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (28.39s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885873 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1222 00:17:06.728419 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-885873 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.393249648s)
functional_test.go:776: restart took 28.393352184s for "functional-885873" cluster.
I1222 00:17:34.660038 1881219 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (28.39s)
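
The restart-with-component-flags pattern used here generalizes to any kubeadm extra-config key. A minimal sketch of the call the test makes (NamespaceAutoProvision is the plugin the test enables; any supported apiserver flag fits the same shape):

# soft-restart an existing profile, injecting an apiserver flag, and wait for all components
minikube start -p functional-885873 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all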

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-885873 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
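
The same phase read-out can be pulled with kubectl alone. A minimal sketch, assuming the functional-885873 context; the jsonpath query is illustrative, not the test's own (the test parses the full JSON):

# print each control-plane pod with its phase
kubectl --context functional-885873 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'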

TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 logs: (1.458473812s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 logs --file /tmp/TestFunctionalserialLogsFileCmd1735541317/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 logs --file /tmp/TestFunctionalserialLogsFileCmd1735541317/001/logs.txt: (1.485218596s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-885873 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-885873
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-885873: exit status 115 (386.142472ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32749 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-885873 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 config get cpus: exit status 14 (107.446882ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 config get cpus: exit status 14 (78.122577ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
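
The round-trip this test drives, and the exit code it checks for, look like this by hand (a minimal sketch; exit status 14 for a missing key is taken from the runs above):

minikube -p functional-885873 config set cpus 2
minikube -p functional-885873 config get cpus    # prints 2
minikube -p functional-885873 config unset cpus
minikube -p functional-885873 config get cpus    # exits 14: key not found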

TestFunctional/parallel/DashboardCmd (13.71s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-885873 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-885873 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1907173: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.71s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885873 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-885873 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (209.604453ms)

-- stdout --
	* [functional-885873] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1222 00:18:12.616373 1906385 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:18:12.616550 1906385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:18:12.616586 1906385 out.go:374] Setting ErrFile to fd 2...
	I1222 00:18:12.616616 1906385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:18:12.616972 1906385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:18:12.617462 1906385 out.go:368] Setting JSON to false
	I1222 00:18:12.618840 1906385 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111642,"bootTime":1766251051,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:18:12.618946 1906385 start.go:143] virtualization:  
	I1222 00:18:12.622063 1906385 out.go:179] * [functional-885873] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:18:12.625180 1906385 notify.go:221] Checking for updates...
	I1222 00:18:12.625688 1906385 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:18:12.628614 1906385 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:18:12.631527 1906385 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:18:12.634411 1906385 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:18:12.637285 1906385 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:18:12.640238 1906385 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:18:12.643658 1906385 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:18:12.644268 1906385 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:18:12.680036 1906385 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:18:12.680165 1906385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:18:12.758384 1906385 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 00:18:12.748334957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:18:12.758480 1906385 docker.go:319] overlay module found
	I1222 00:18:12.761564 1906385 out.go:179] * Using the docker driver based on existing profile
	I1222 00:18:12.764380 1906385 start.go:309] selected driver: docker
	I1222 00:18:12.764397 1906385 start.go:928] validating driver "docker" against &{Name:functional-885873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-885873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:18:12.764487 1906385 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:18:12.769130 1906385 out.go:203] 
	W1222 00:18:12.772125 1906385 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 00:18:12.774939 1906385 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885873 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.56s)
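
Dry-run validation can be reproduced without touching the running profile. A minimal sketch (the 1800MB floor and exit status 23 are taken from the output above):

# validate configuration only; requesting 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY
minikube start -p functional-885873 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio
echo $?   # 23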

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885873 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-885873 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (206.298146ms)

-- stdout --
	* [functional-885873] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1222 00:18:12.417772 1906339 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:18:12.417885 1906339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:18:12.417899 1906339 out.go:374] Setting ErrFile to fd 2...
	I1222 00:18:12.417904 1906339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:18:12.419488 1906339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:18:12.419945 1906339 out.go:368] Setting JSON to false
	I1222 00:18:12.422279 1906339 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":111642,"bootTime":1766251051,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:18:12.422408 1906339 start.go:143] virtualization:  
	I1222 00:18:12.426059 1906339 out.go:179] * [functional-885873] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1222 00:18:12.429875 1906339 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:18:12.429915 1906339 notify.go:221] Checking for updates...
	I1222 00:18:12.432798 1906339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:18:12.435830 1906339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:18:12.438897 1906339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:18:12.441939 1906339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:18:12.444814 1906339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:18:12.448100 1906339 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:18:12.448679 1906339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:18:12.473510 1906339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:18:12.473641 1906339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:18:12.546981 1906339 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 00:18:12.536617081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:18:12.547098 1906339 docker.go:319] overlay module found
	I1222 00:18:12.550231 1906339 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1222 00:18:12.553009 1906339 start.go:309] selected driver: docker
	I1222 00:18:12.553031 1906339 start.go:928] validating driver "docker" against &{Name:functional-885873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-885873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:18:12.553142 1906339 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:18:12.556809 1906339 out.go:203] 
	W1222 00:18:12.559800 1906339 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1222 00:18:12.562771 1906339 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
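
Status formatting is a Go template over the status fields seen above. A minimal sketch of the three variants the test runs (the field names Host, Kubelet, APIServer, Kubeconfig come from the logged command):

minikube -p functional-885873 status
minikube -p functional-885873 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
minikube -p functional-885873 status -o json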

TestFunctional/parallel/ServiceCmdConnect (7.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-885873 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-885873 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-gwqql" [87de190d-c5b6-413c-b94e-17203077378c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-gwqql" [87de190d-c5b6-413c-b94e-17203077378c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003296574s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31585
functional_test.go:1680: http://192.168.49.2:31585: success! body:
Request served by hello-node-connect-7d85dfc575-gwqql

HTTP/1.1 GET /

Host: 192.168.49.2:31585
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.81s)
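
The NodePort round-trip is a four-step recipe. A minimal sketch using the same kicbase/echo-server image; curl stands in for the Go HTTP client the test uses:

kubectl --context functional-885873 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-885873 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(minikube -p functional-885873 service hello-node-connect --url)
curl -s "$URL"   # echo-server replies with the request details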

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (18.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [198c859f-dcd4-4176-a4df-f9428179cb94] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00319644s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-885873 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-885873 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-885873 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-885873 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e04cfdeb-0335-486b-9d20-c2d4818f3217] Pending
helpers_test.go:353: "sp-pod" [e04cfdeb-0335-486b-9d20-c2d4818f3217] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003470546s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-885873 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-885873 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-885873 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [9a7e6b38-ff20-488d-a3d6-66a1121d74d5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [9a7e6b38-ff20-488d-a3d6-66a1121d74d5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003831485s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-885873 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (18.73s)
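
The persistence check is a two-pod handoff over one claim. A minimal sketch, assuming the test's manifests (a PVC named myclaim and a pod named sp-pod mounting it at /tmp/mount):

kubectl --context functional-885873 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-885873 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-885873 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-885873 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-885873 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
kubectl --context functional-885873 exec sp-pod -- ls /tmp/mount                     # foo survives the pod swap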

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh -n functional-885873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cp functional-885873:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2364764116/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh -n functional-885873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh -n functional-885873 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.37s)
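
Both copy directions share one syntax. A minimal sketch mirroring the transfers above (per these runs, an unprefixed destination lands on the node, while profile:path names the node side explicitly):

minikube -p functional-885873 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
minikube -p functional-885873 cp functional-885873:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
minikube -p functional-885873 ssh -n functional-885873 "sudo cat /home/docker/cp-test.txt"     # verify inside the node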

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1881219/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /etc/test/nested/copy/1881219/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1881219.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /etc/ssl/certs/1881219.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1881219.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /usr/share/ca-certificates/1881219.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /etc/ssl/certs/51391683.0"
2025/12/22 00:18:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /etc/ssl/certs/18812192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /etc/ssl/certs/18812192.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/18812192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /usr/share/ca-certificates/18812192.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-885873 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh "sudo systemctl is-active docker": exit status 1 (375.465801ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh "sudo systemctl is-active containerd": exit status 1 (349.222638ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
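
The exclusivity check is plain systemd probing through minikube ssh. A minimal sketch (systemctl is-active exits non-zero for inactive units, surfaced above as exit status 1; the final line is an assumption about the active runtime, not something this test runs):

minikube -p functional-885873 ssh "sudo systemctl is-active docker"      # inactive on a CRI-O node
minikube -p functional-885873 ssh "sudo systemctl is-active containerd"  # inactive on a CRI-O node
minikube -p functional-885873 ssh "sudo systemctl is-active crio"        # assumption: reports active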

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-885873 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-885873 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-885873 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-885873 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1903825: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-885873 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-885873 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [e8b4e69b-846c-42fc-9bda-fd397f2ccdbd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [e8b4e69b-846c-42fc-9bda-fd397f2ccdbd] Running
E1222 00:17:47.688615 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003226563s
I1222 00:17:52.234710 1881219 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-885873 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.136.175 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-885873 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
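
End to end, the tunnel serial group reduces to a few shell steps. A minimal sketch, assuming the test's testsvc.yaml (a LoadBalancer service named nginx-svc); the curl stands in for the test's HTTP probe:

minikube -p functional-885873 tunnel &    # background route for LoadBalancer IPs
TUNNEL_PID=$!
kubectl --context functional-885873 apply -f testdata/testsvc.yaml
IP=$(kubectl --context functional-885873 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP" >/dev/null && echo "tunnel works"
kill $TUNNEL_PID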

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-885873 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-885873 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-c7dzz" [26e1e3f5-b3eb-4be8-8549-e40c2dfa3e23] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-c7dzz" [26e1e3f5-b3eb-4be8-8549-e40c2dfa3e23] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004394517s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "376.697524ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "60.440447ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "378.013112ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.4969ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
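
Note: the three ProfileCmd subtests only vary the output mode of `profile list`. The light variants skip probing cluster status, which is why they return in ~60ms versus ~380ms above. Equivalent invocations:

	out/minikube-linux-arm64 profile list                  # full table, probes status
	out/minikube-linux-arm64 profile list -l               # light: config only
	out/minikube-linux-arm64 profile list -o json
	out/minikube-linux-arm64 profile list -o json --light
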
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdany-port3183464975/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766362684344918694" to /tmp/TestFunctionalparallelMountCmdany-port3183464975/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766362684344918694" to /tmp/TestFunctionalparallelMountCmdany-port3183464975/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766362684344918694" to /tmp/TestFunctionalparallelMountCmdany-port3183464975/001/test-1766362684344918694
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.181954ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1222 00:18:04.705692 1881219 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 22 00:18 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 22 00:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 22 00:18 test-1766362684344918694
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh cat /mount-9p/test-1766362684344918694
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-885873 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [1843d858-ba40-452f-87f2-a16994d56b5a] Pending
helpers_test.go:353: "busybox-mount" [1843d858-ba40-452f-87f2-a16994d56b5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [1843d858-ba40-452f-87f2-a16994d56b5a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [1843d858-ba40-452f-87f2-a16994d56b5a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003668917s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-885873 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdany-port3183464975/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.34s)
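
Note: any-port mounts a host directory into the guest over 9p and verifies it from both sides. A sketch, assuming the same profile and a hypothetical host directory /tmp/demo:

	# foreground daemon; leave it running while the mount is in use
	out/minikube-linux-arm64 mount -p functional-885873 /tmp/demo:/mount-9p --alsologtostderr -v=1

	# from another shell, verify the 9p mount and list its contents
	out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-885873 ssh -- ls -la /mount-9p

The first findmnt attempt above failed only because the mount was not yet up; the helper retried after 600ms and succeeded.
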
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 service list -o json
functional_test.go:1504: Took "524.761876ms" to run "out/minikube-linux-arm64 -p functional-885873 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31185
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31185
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
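
Note: HTTPS, Format and URL are three views of the same NodePort endpoint (192.168.49.2:31185 in this run):

	out/minikube-linux-arm64 -p functional-885873 service hello-node --url
	out/minikube-linux-arm64 -p functional-885873 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-885873 service hello-node --url --format={{.IP}}   # IP only
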
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdspecific-port328828977/001:/mount-9p --alsologtostderr -v=1 --port 34303]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (546.669388ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1222 00:18:12.235850 1881219 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdspecific-port328828977/001:/mount-9p --alsologtostderr -v=1 --port 34303] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh "sudo umount -f /mount-9p": exit status 1 (342.964781ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-885873 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdspecific-port328828977/001:/mount-9p --alsologtostderr -v=1 --port 34303] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.28s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3353278550/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3353278550/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3353278550/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-885873 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3353278550/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3353278550/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3353278550/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
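
Note: VerifyCleanup mounts one host directory at three guest paths and then uses --kill to reap every mount daemon for the profile in one shot, which is why the individual stop attempts afterwards find no parent process. Sketch, reusing the hypothetical /tmp/demo directory:

	out/minikube-linux-arm64 mount -p functional-885873 /tmp/demo:/mount1 &
	out/minikube-linux-arm64 mount -p functional-885873 /tmp/demo:/mount2 &
	out/minikube-linux-arm64 mount -p functional-885873 /tmp/demo:/mount3 &
	out/minikube-linux-arm64 -p functional-885873 ssh "findmnt -T" /mount1
	out/minikube-linux-arm64 mount -p functional-885873 --kill=true   # kills all three daemons
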
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.90s)
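
Note: --short prints only the minikube version, while --components additionally reports per-component versions from inside the node (container runtime, build tooling, and so on):

	out/minikube-linux-arm64 -p functional-885873 version --short
	out/minikube-linux-arm64 -p functional-885873 version -o=json --components
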
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885873 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-885873
localhost/kicbase/echo-server:functional-885873
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885873 image ls --format short --alsologtostderr:
I1222 00:18:27.819325 1908693 out.go:360] Setting OutFile to fd 1 ...
I1222 00:18:27.819562 1908693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:27.819594 1908693 out.go:374] Setting ErrFile to fd 2...
I1222 00:18:27.819615 1908693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:27.819890 1908693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:18:27.820500 1908693 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:27.820659 1908693 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:27.821226 1908693 cli_runner.go:164] Run: docker container inspect functional-885873 --format={{.State.Status}}
I1222 00:18:27.848134 1908693 ssh_runner.go:195] Run: systemctl --version
I1222 00:18:27.848191 1908693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885873
I1222 00:18:27.878184 1908693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38466 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-885873/id_rsa Username:docker}
I1222 00:18:27.984962 1908693 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
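
Note: the four ImageList subtests differ only in the encoding of `image ls`; per the stderr traces, each one shells into the node and reads `sudo crictl images --output json` underneath:

	out/minikube-linux-arm64 -p functional-885873 image ls --format short
	out/minikube-linux-arm64 -p functional-885873 image ls --format table
	out/minikube-linux-arm64 -p functional-885873 image ls --format json
	out/minikube-linux-arm64 -p functional-885873 image ls --format yaml
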
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885873 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ 962dbbc0e55ec │ 55.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3                               │ 7ada8ff13e54b │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ docker.io/kicbase/echo-server           │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-885873                     │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ localhost/minikube-local-cache-test     │ functional-885873                     │ 5284a74c87da2 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3                               │ 4461daf6b6af8 │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3                               │ 2f2aa21d34d2d │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.3                               │ cf65ae6c8f700 │ 84.8MB │
│ registry.k8s.io/pause                   │ latest                                │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885873 image ls --format table --alsologtostderr:
I1222 00:18:28.449451 1908878 out.go:360] Setting OutFile to fd 1 ...
I1222 00:18:28.449637 1908878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:28.449656 1908878 out.go:374] Setting ErrFile to fd 2...
I1222 00:18:28.449677 1908878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:28.449968 1908878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:18:28.450612 1908878 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:28.450777 1908878 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:28.451348 1908878 cli_runner.go:164] Run: docker container inspect functional-885873 --format={{.State.Status}}
I1222 00:18:28.479063 1908878 ssh_runner.go:195] Run: systemctl --version
I1222 00:18:28.479120 1908878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885873
I1222 00:18:28.500282 1908878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38466 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-885873/id_rsa Username:docker}
I1222 00:18:28.600334 1908878 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885873 image ls --format json --alsologtostderr:
[{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:49437795b4edd6ed8ada141b20cf576fb0aa4e84b82d6a25af841ed293abece1","registry.k8s.io/kube-controller-manager@sha256:716a210
d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"72629077"},{"id":"4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":["registry.k8s.io/kube-proxy@sha256:5c52b97ed657a0a1ef3c24e25d953fcca37fa200f3ec98938c254d748008dd86","registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"75941783"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b90
8eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896","repoDigests":["registry.k8s
.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:6fa1e54cee33473ab964d87ea870ccf4ac9e6e4012b6d73160fcc3a99c7be9b5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"84818927"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f
6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"5284a74c87da22d163f760a326c4966bccb011bdce6d9ae20b1837b332f05716","repoDigests":["localhost/minikube-local-cache-test@sha256:e77c7cac2e885512094ccb9c31eddb94717b6c2a321dbd09b136b96dfc844847"],"repoTags":["localhost/minikube-local-cache-test:functional-885873"],"size":"3330"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077764"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5
898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:7f3d992e0f2cb23d075ddafc8c73b5bdcf0ebc01098ef92965cc371eabcb9611","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":
"519884"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-885873"],"size":"4789170"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885873 image ls --format json --alsologtostderr:
I1222 00:18:28.175764 1908784 out.go:360] Setting OutFile to fd 1 ...
I1222 00:18:28.175938 1908784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:28.175970 1908784 out.go:374] Setting ErrFile to fd 2...
I1222 00:18:28.175990 1908784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:28.176360 1908784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:18:28.177349 1908784 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:28.177611 1908784 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:28.178290 1908784 cli_runner.go:164] Run: docker container inspect functional-885873 --format={{.State.Status}}
I1222 00:18:28.195726 1908784 ssh_runner.go:195] Run: systemctl --version
I1222 00:18:28.195779 1908784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885873
I1222 00:18:28.234505 1908784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38466 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-885873/id_rsa Username:docker}
I1222 00:18:28.337405 1908784 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885873 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-885873
size: "4789170"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:6fa1e54cee33473ab964d87ea870ccf4ac9e6e4012b6d73160fcc3a99c7be9b5
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "84818927"
- id: 7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:49437795b4edd6ed8ada141b20cf576fb0aa4e84b82d6a25af841ed293abece1
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "72629077"
- id: 2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:7f3d992e0f2cb23d075ddafc8c73b5bdcf0ebc01098ef92965cc371eabcb9611
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 5284a74c87da22d163f760a326c4966bccb011bdce6d9ae20b1837b332f05716
repoDigests:
- localhost/minikube-local-cache-test@sha256:e77c7cac2e885512094ccb9c31eddb94717b6c2a321dbd09b136b96dfc844847
repoTags:
- localhost/minikube-local-cache-test:functional-885873
size: "3330"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077764"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests:
- registry.k8s.io/kube-proxy@sha256:5c52b97ed657a0a1ef3c24e25d953fcca37fa200f3ec98938c254d748008dd86
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "75941783"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885873 image ls --format yaml --alsologtostderr:
I1222 00:18:27.910072 1908713 out.go:360] Setting OutFile to fd 1 ...
I1222 00:18:27.910252 1908713 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:27.910284 1908713 out.go:374] Setting ErrFile to fd 2...
I1222 00:18:27.910303 1908713 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:27.910570 1908713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:18:27.911228 1908713 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:27.911402 1908713 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:27.911922 1908713 cli_runner.go:164] Run: docker container inspect functional-885873 --format={{.State.Status}}
I1222 00:18:27.931509 1908713 ssh_runner.go:195] Run: systemctl --version
I1222 00:18:27.931558 1908713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885873
I1222 00:18:27.952460 1908713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38466 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-885873/id_rsa Username:docker}
I1222 00:18:28.055702 1908713 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885873 ssh pgrep buildkitd: exit status 1 (367.989349ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr: (3.501377457s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c29eec09258
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-885873
--> 4edd1d7665c
Successfully tagged localhost/my-image:functional-885873
4edd1d7665c939b4bbe2ff3eebb864585e01c83ef27bf473125c3b92206b488b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885873 image build -t localhost/my-image:functional-885873 testdata/build --alsologtostderr:
I1222 00:18:28.454366 1908884 out.go:360] Setting OutFile to fd 1 ...
I1222 00:18:28.455774 1908884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:28.455817 1908884 out.go:374] Setting ErrFile to fd 2...
I1222 00:18:28.455838 1908884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:18:28.456116 1908884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:18:28.456790 1908884 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:28.458042 1908884 config.go:182] Loaded profile config "functional-885873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1222 00:18:28.458592 1908884 cli_runner.go:164] Run: docker container inspect functional-885873 --format={{.State.Status}}
I1222 00:18:28.479831 1908884 ssh_runner.go:195] Run: systemctl --version
I1222 00:18:28.479881 1908884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885873
I1222 00:18:28.505489 1908884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38466 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-885873/id_rsa Username:docker}
I1222 00:18:28.620372 1908884 build_images.go:162] Building image from path: /tmp/build.2098007231.tar
I1222 00:18:28.620449 1908884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1222 00:18:28.630103 1908884 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2098007231.tar
I1222 00:18:28.643542 1908884 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2098007231.tar: stat -c "%s %y" /var/lib/minikube/build/build.2098007231.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2098007231.tar': No such file or directory
I1222 00:18:28.643576 1908884 ssh_runner.go:362] scp /tmp/build.2098007231.tar --> /var/lib/minikube/build/build.2098007231.tar (3072 bytes)
I1222 00:18:28.692986 1908884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2098007231
I1222 00:18:28.711855 1908884 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2098007231 -xf /var/lib/minikube/build/build.2098007231.tar
I1222 00:18:28.722425 1908884 crio.go:315] Building image: /var/lib/minikube/build/build.2098007231
I1222 00:18:28.722489 1908884 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-885873 /var/lib/minikube/build/build.2098007231 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1222 00:18:31.871239 1908884 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-885873 /var/lib/minikube/build/build.2098007231 --cgroup-manager=cgroupfs: (3.148727667s)
I1222 00:18:31.871318 1908884 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2098007231
I1222 00:18:31.879304 1908884 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2098007231.tar
I1222 00:18:31.887293 1908884 build_images.go:218] Built localhost/my-image:functional-885873 from /tmp/build.2098007231.tar
I1222 00:18:31.887319 1908884 build_images.go:134] succeeded building to: functional-885873
I1222 00:18:31.887325 1908884 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.10s)
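
Note: with no buildkitd in the node (the pgrep probe above exits 1), the crio path falls back to podman inside the node, as the trace shows. Judging by the three STEPs in the output, the testdata/build context is equivalent to a Dockerfile of this shape (content.txt being whatever payload the test ships):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

and is built with:

	out/minikube-linux-arm64 -p functional-885873 image build -t localhost/my-image:functional-885873 testdata/build
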
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-885873
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image load --daemon kicbase/echo-server:functional-885873 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 image load --daemon kicbase/echo-server:functional-885873 --alsologtostderr: (3.356071386s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image load --daemon kicbase/echo-server:functional-885873 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-885873 image load --daemon kicbase/echo-server:functional-885873 --alsologtostderr: (1.067817287s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-885873
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image load --daemon kicbase/echo-server:functional-885873 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image save kicbase/echo-server:functional-885873 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image rm kicbase/echo-server:functional-885873 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-885873
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 image save --daemon kicbase/echo-server:functional-885873 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-885873
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
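
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a round trip between the node's image store and the host:

	# node -> tarball on the host
	out/minikube-linux-arm64 -p functional-885873 image save kicbase/echo-server:functional-885873 ./echo-server-save.tar
	# drop it from the node, then restore it from the tarball
	out/minikube-linux-arm64 -p functional-885873 image rm kicbase/echo-server:functional-885873
	out/minikube-linux-arm64 -p functional-885873 image load ./echo-server-save.tar
	# node -> host docker daemon (shows up as localhost/kicbase/echo-server:functional-885873)
	out/minikube-linux-arm64 -p functional-885873 image save --daemon kicbase/echo-server:functional-885873
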
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-885873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
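
Note: the three UpdateContextCmd subtests run the same command against different kubeconfig states; update-context rewrites the profile's kubeconfig entry so the server address matches the running cluster:

	out/minikube-linux-arm64 -p functional-885873 update-context --alsologtostderr -v=2
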
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-885873
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-885873
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-885873
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-1874095/.minikube/files/etc/test/nested/copy/1881219/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 cache add registry.k8s.io/pause:3.1: (1.174367619s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 cache add registry.k8s.io/pause:3.3: (1.13702354s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 cache add registry.k8s.io/pause:latest: (1.126610477s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2769873934/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cache add minikube-local-cache-test:functional-837733
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cache delete minikube-local-cache-test:functional-837733
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-837733
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.798841ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.91s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.96s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4071649792/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 config get cpus: exit status 14 (72.171659ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 config get cpus: exit status 14 (68.98252ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (198.265785ms)

-- stdout --
	* [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1222 00:47:59.715435 1940163 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:47:59.715616 1940163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:47:59.715633 1940163 out.go:374] Setting ErrFile to fd 2...
	I1222 00:47:59.715642 1940163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:47:59.715897 1940163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:47:59.716271 1940163 out.go:368] Setting JSON to false
	I1222 00:47:59.717182 1940163 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":113429,"bootTime":1766251051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:47:59.717257 1940163 start.go:143] virtualization:  
	I1222 00:47:59.721150 1940163 out.go:179] * [functional-837733] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:47:59.724927 1940163 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:47:59.725001 1940163 notify.go:221] Checking for updates...
	I1222 00:47:59.727963 1940163 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:47:59.730942 1940163 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:47:59.734043 1940163 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:47:59.737251 1940163 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:47:59.740082 1940163 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:47:59.743428 1940163 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:47:59.744076 1940163 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:47:59.775750 1940163 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:47:59.775878 1940163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:47:59.844496 1940163 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:47:59.833435251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:47:59.844593 1940163 docker.go:319] overlay module found
	I1222 00:47:59.849403 1940163 out.go:179] * Using the docker driver based on existing profile
	I1222 00:47:59.852221 1940163 start.go:309] selected driver: docker
	I1222 00:47:59.852240 1940163 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:47:59.852388 1940163 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:47:59.855814 1940163 out.go:203] 
	W1222 00:47:59.858709 1940163 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 00:47:59.861705 1940163 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-837733 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-837733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (222.463582ms)

-- stdout --
	* [functional-837733] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1222 00:48:00.432356 1940285 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:48:00.432506 1940285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:00.432512 1940285 out.go:374] Setting ErrFile to fd 2...
	I1222 00:48:00.432518 1940285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:00.433011 1940285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:48:00.433541 1940285 out.go:368] Setting JSON to false
	I1222 00:48:00.434510 1940285 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":113430,"bootTime":1766251051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 00:48:00.434603 1940285 start.go:143] virtualization:  
	I1222 00:48:00.438306 1940285 out.go:179] * [functional-837733] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1222 00:48:00.441464 1940285 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:48:00.441573 1940285 notify.go:221] Checking for updates...
	I1222 00:48:00.447466 1940285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:48:00.450435 1940285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 00:48:00.453538 1940285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 00:48:00.456664 1940285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:48:00.460323 1940285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:48:00.463901 1940285 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1222 00:48:00.464651 1940285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:48:00.501637 1940285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:48:00.501792 1940285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:48:00.569175 1940285 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:48:00.558846475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:48:00.569280 1940285 docker.go:319] overlay module found
	I1222 00:48:00.572557 1940285 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1222 00:48:00.575496 1940285 start.go:309] selected driver: docker
	I1222 00:48:00.575517 1940285 start.go:928] validating driver "docker" against &{Name:functional-837733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-837733 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:48:00.575624 1940285 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:48:00.579072 1940285 out.go:203] 
	W1222 00:48:00.581978 1940285 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1222 00:48:00.584851 1940285 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.73s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh -n functional-837733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cp functional-837733:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm866299650/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh -n functional-837733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh -n functional-837733 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1881219/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /etc/test/nested/copy/1881219/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (2.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1881219.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /etc/ssl/certs/1881219.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1881219.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /usr/share/ca-certificates/1881219.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/18812192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /etc/ssl/certs/18812192.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/18812192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /usr/share/ca-certificates/18812192.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (2.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh "sudo systemctl is-active docker": exit status 1 (402.216267ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh "sudo systemctl is-active containerd": exit status 1 (332.845041ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-837733 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-837733
localhost/kicbase/echo-server:functional-837733
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-837733 image ls --format short --alsologtostderr:
I1222 00:48:03.426226 1940942 out.go:360] Setting OutFile to fd 1 ...
I1222 00:48:03.426428 1940942 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:03.426462 1940942 out.go:374] Setting ErrFile to fd 2...
I1222 00:48:03.426484 1940942 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:03.426872 1940942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:48:03.428074 1940942 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:03.428266 1940942 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:03.428820 1940942 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:48:03.446761 1940942 ssh_runner.go:195] Run: systemctl --version
I1222 00:48:03.446828 1940942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:48:03.463891 1940942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
I1222 00:48:03.561051 1940942 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-837733 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 3c6ba27e07aef │ 85MB   │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ 7e3acea3d87aa │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ a34b3483f25ba │ 72.2MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ localhost/kicbase/echo-server           │ functional-837733  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-837733  │ 5284a74c87da2 │ 3.33kB │
│ localhost/my-image                      │ functional-837733  │ c39130bd4d636 │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 271e49a0ebc56 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ abca4d5226620 │ 49.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-837733 image ls --format table --alsologtostderr:
I1222 00:48:07.719610 1941437 out.go:360] Setting OutFile to fd 1 ...
I1222 00:48:07.719800 1941437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:07.719829 1941437 out.go:374] Setting ErrFile to fd 2...
I1222 00:48:07.719842 1941437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:07.720160 1941437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:48:07.720784 1941437 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:07.720914 1941437 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:07.721442 1941437 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:48:07.738747 1941437 ssh_runner.go:195] Run: systemctl --version
I1222 00:48:07.738804 1941437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:48:07.755664 1941437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
I1222 00:48:07.847899 1941437 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-837733 image ls --format json --alsologtostderr:
[{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":["registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"74107287"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.i
o/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"231a793a71043230511f32b4ed4a1bb40f9c100237e7eeb3032032b513741da2","repoDigests":["docker.io/library/936be4bef5dc351528fbe3c3f2b25cecdba434befaa12ba89ad16784f14b67b2-tmp@sha256:214e5f214994a19eba0da626737a53660f756f4cb23ba3c6147f210e0c065792"],"repoTags":[],"size":"1638179"},{"id":"c39130bd4d636529c4569c08a2fe7fe3ccb0f7639cb555eb57c2385ad675bc3d","repoDigests":["localhost/my-image@sha256:c9a5d0d96471540011f07e4620671c8c4b0a92c01f679c9886835b8fa15b59d5"],"repoTags":["localhost/my-image:functional-837733"],"size":"1640791"},{"id":"a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigest
s":["registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f","registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"72170325"},{"id":"abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3","registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigest
s":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"5284a74c87da22d163f760a326c4966bccb011bdce6d9ae20b1837b332f05716","repoDigests":["localhost/minikube-local-cache-test@sha256:e77c7cac2e885512094ccb9c31eddb94717b6c2a321dbd09b136b96dfc844847"],"repoTags":["localhost/minikube-local-cache-test:functional-837733"],"size":"3330"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35
fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":["registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee","registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"85015535"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/bus
ybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-837733"],"size":"4788229"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-837733 image ls --format json --alsologtostderr:
I1222 00:48:07.502785 1941399 out.go:360] Setting OutFile to fd 1 ...
I1222 00:48:07.502962 1941399 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:07.503005 1941399 out.go:374] Setting ErrFile to fd 2...
I1222 00:48:07.503027 1941399 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:07.503313 1941399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:48:07.503944 1941399 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:07.504122 1941399 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:07.504653 1941399 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:48:07.521514 1941399 ssh_runner.go:195] Run: systemctl --version
I1222 00:48:07.521572 1941399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:48:07.538743 1941399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
I1222 00:48:07.635828 1941399 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.22s)
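The stdout above is a single JSON array of image records, fetched on the node via sudo crictl images --output json. A minimal standalone Go sketch that decodes such an array (the struct fields mirror the entries above; this is an illustration, not minikube's own test helper):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors one record of the `image ls --format json` array above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Hypothetical usage:
	//   out/minikube-linux-arm64 -p functional-837733 image ls --format json | go run .
	var imgs []image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range imgs {
		// Print a short ID, the tags, and the size string as reported.
		fmt.Printf("%.12s %v %s\n", img.ID, img.RepoTags, img.Size)
	}
}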

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-837733 image ls --format yaml --alsologtostderr:
- id: 5284a74c87da22d163f760a326c4966bccb011bdce6d9ae20b1837b332f05716
repoDigests:
- localhost/minikube-local-cache-test@sha256:e77c7cac2e885512094ccb9c31eddb94717b6c2a321dbd09b136b96dfc844847
repoTags:
- localhost/minikube-local-cache-test:functional-837733
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
- registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "85015535"
- id: abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
- registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "72170325"
- id: 7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "74107287"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-837733
size: "4788229"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-837733 image ls --format yaml --alsologtostderr:
I1222 00:48:03.683536 1940983 out.go:360] Setting OutFile to fd 1 ...
I1222 00:48:03.683702 1940983 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:03.683717 1940983 out.go:374] Setting ErrFile to fd 2...
I1222 00:48:03.683723 1940983 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:03.684028 1940983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:48:03.684670 1940983 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:03.684814 1940983 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:03.685657 1940983 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:48:03.703527 1940983 ssh_runner.go:195] Run: systemctl --version
I1222 00:48:03.703576 1940983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:48:03.720142 1940983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
I1222 00:48:03.815882 1940983 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)
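The YAML listing above carries the same records as the JSON variant. A sketch of decoding it with gopkg.in/yaml.v3 (an assumed external dependency; images.yaml is a hypothetical dump of the stdout above, not a file the test creates):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// image mirrors one entry of the `image ls --format yaml` output above.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	data, err := os.ReadFile("images.yaml") // hypothetical dump of the stdout above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var imgs []image
	if err := yaml.Unmarshal(data, &imgs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range imgs {
		fmt.Printf("%.12s %v\n", img.ID, img.RepoTags)
	}
}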

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh pgrep buildkitd: exit status 1 (263.866794ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image build -t localhost/my-image:functional-837733 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 image build -t localhost/my-image:functional-837733 testdata/build --alsologtostderr: (3.113283004s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-837733 image build -t localhost/my-image:functional-837733 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 231a793a710
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-837733
--> c39130bd4d6
Successfully tagged localhost/my-image:functional-837733
c39130bd4d636529c4569c08a2fe7fe3ccb0f7639cb555eb57c2385ad675bc3d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-837733 image build -t localhost/my-image:functional-837733 testdata/build --alsologtostderr:
I1222 00:48:04.167206 1941082 out.go:360] Setting OutFile to fd 1 ...
I1222 00:48:04.167422 1941082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:04.167456 1941082 out.go:374] Setting ErrFile to fd 2...
I1222 00:48:04.167474 1941082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:48:04.167743 1941082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
I1222 00:48:04.168360 1941082 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:04.169046 1941082 config.go:182] Loaded profile config "functional-837733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1222 00:48:04.169659 1941082 cli_runner.go:164] Run: docker container inspect functional-837733 --format={{.State.Status}}
I1222 00:48:04.187186 1941082 ssh_runner.go:195] Run: systemctl --version
I1222 00:48:04.187242 1941082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-837733
I1222 00:48:04.204178 1941082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38471 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/functional-837733/id_rsa Username:docker}
I1222 00:48:04.303808 1941082 build_images.go:162] Building image from path: /tmp/build.118825467.tar
I1222 00:48:04.303896 1941082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1222 00:48:04.311562 1941082 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.118825467.tar
I1222 00:48:04.315313 1941082 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.118825467.tar: stat -c "%s %y" /var/lib/minikube/build/build.118825467.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.118825467.tar': No such file or directory
I1222 00:48:04.315344 1941082 ssh_runner.go:362] scp /tmp/build.118825467.tar --> /var/lib/minikube/build/build.118825467.tar (3072 bytes)
I1222 00:48:04.334030 1941082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.118825467
I1222 00:48:04.345080 1941082 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.118825467 -xf /var/lib/minikube/build/build.118825467.tar
I1222 00:48:04.354204 1941082 crio.go:315] Building image: /var/lib/minikube/build/build.118825467
I1222 00:48:04.354290 1941082 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-837733 /var/lib/minikube/build/build.118825467 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1222 00:48:07.203694 1941082 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-837733 /var/lib/minikube/build/build.118825467 --cgroup-manager=cgroupfs: (2.849370393s)
I1222 00:48:07.203768 1941082 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.118825467
I1222 00:48:07.211880 1941082 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.118825467.tar
I1222 00:48:07.220418 1941082 build_images.go:218] Built localhost/my-image:functional-837733 from /tmp/build.118825467.tar
I1222 00:48:07.220446 1941082 build_images.go:134] succeeded building to: functional-837733
I1222 00:48:07.220451 1941082 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.60s)
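The stderr above shows the fallback path taken when buildkitd is not running on the node: the build context is shipped over as a tar, unpacked under /var/lib/minikube/build, and built with podman. A rough local sketch of that command sequence (the tar path, build dir, and tag are placeholders; the real test drives these commands over SSH through its own runner):

package main

import (
	"fmt"
	"os/exec"
)

// runCmd executes a command, echoes its combined output, and returns any error.
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	tarPath := "/var/lib/minikube/build/build.example.tar" // hypothetical staged tar
	dir := "/var/lib/minikube/build/build.example"         // hypothetical build dir
	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", tarPath},
		{"sudo", "podman", "build", "-t", "localhost/my-image:example", dir,
			"--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", dir},
		{"sudo", "rm", "-f", tarPath},
	}
	for _, s := range steps {
		if err := runCmd(s[0], s[1:]...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}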

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-837733
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image load --daemon kicbase/echo-server:functional-837733 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-837733 image load --daemon kicbase/echo-server:functional-837733 --alsologtostderr: (1.208132292s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image load --daemon kicbase/echo-server:functional-837733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-837733
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image load --daemon kicbase/echo-server:functional-837733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image save kicbase/echo-server:functional-837733 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image rm kicbase/echo-server:functional-837733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-837733
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 image save --daemon kicbase/echo-server:functional-837733 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-837733
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.53s)
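The last three sections exercise a save/remove/load round trip for the echo-server image. A sketch of the same sequence driven directly against the minikube binary (the tar path here is a placeholder; the binary path, profile, and tag come from the log):

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary from the log with the given args and echoes output.
func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	const tag = "kicbase/echo-server:functional-837733"
	const tarPath = "/tmp/echo-server-save.tar" // hypothetical save location
	for _, step := range [][]string{
		{"-p", "functional-837733", "image", "save", tag, tarPath},
		{"-p", "functional-837733", "image", "rm", tag},
		{"-p", "functional-837733", "image", "load", tarPath},
		{"-p", "functional-837733", "image", "ls"},
	} {
		if err := mk(step...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}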

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-837733 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "320.309547ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.322158ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "341.174758ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.16609ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852505981/001:/mount-9p --alsologtostderr -v=1 --port 33765]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.715079ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852505981/001:/mount-9p --alsologtostderr -v=1 --port 33765] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-837733 ssh "sudo umount -f /mount-9p": exit status 1 (245.23039ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-837733 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852505981/001:/mount-9p --alsologtostderr -v=1 --port 33765] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.08s)
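The first findmnt probe above fails because the 9p mount is still coming up; the test simply retries until it appears. A sketch of that poll-until-mounted pattern (plain local exec stands in for the test's SSH wrapper; the mount point is the one from the log, the timeout is an arbitrary choice):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// mounted reports whether findmnt sees a 9p filesystem at dir.
func mounted(dir string) bool {
	out, err := exec.Command("findmnt", "-T", dir).Output()
	return err == nil && strings.Contains(string(out), "9p")
}

func main() {
	const dir = "/mount-9p" // mount point used by the test above
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if mounted(dir) {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for 9p mount")
}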

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-837733 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-837733 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-837733 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3530626045/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-837733
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-837733
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-837733
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (169.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1222 00:50:47.971620 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:47.976847 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:47.987150 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:48.007414 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:48.047654 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:48.127938 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:48.288323 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:48.608540 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:49.249005 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:50.529480 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:53.090980 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:50:58.211202 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:51:08.451421 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:51:25.759664 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:51:28.932071 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:52:09.892677 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m48.538623486s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (169.41s)

TestMultiControlPlane/serial/DeployApp (6.73s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- rollout status deployment/busybox
E1222 00:52:43.836251 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 kubectl -- rollout status deployment/busybox: (4.04461465s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-f489q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-mp7v2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-rw8dl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-f489q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-mp7v2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-rw8dl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-f489q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-mp7v2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-rw8dl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.73s)
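The DeployApp steps above fan three DNS lookups out across every busybox pod. A compact sketch of the same fan-out using kubectl directly against the cluster context (the log drives kubectl through the minikube binary instead; error handling and namespacing are simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List pod names in the default namespace via a jsonpath query, as the test does.
	out, err := exec.Command("kubectl", "--context", "ha-448204",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("get pods:", err)
		return
	}
	hosts := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			// Run nslookup inside the pod; a non-zero exit marks a DNS failure.
			cmd := exec.Command("kubectl", "--context", "ha-448204",
				"exec", pod, "--", "nslookup", host)
			if err := cmd.Run(); err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n", pod, host, err)
			}
		}
	}
}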

TestMultiControlPlane/serial/PingHostFromPods (1.54s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-f489q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-f489q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-mp7v2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-mp7v2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-rw8dl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 kubectl -- exec busybox-7b57f96db7-rw8dl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)

TestMultiControlPlane/serial/AddWorkerNode (31.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 node add --alsologtostderr -v 5: (30.851159003s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5: (1.104103455s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.96s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-448204 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.023584577s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

TestMultiControlPlane/serial/CopyFile (19.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 status --output json --alsologtostderr -v 5: (1.008479283s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp testdata/cp-test.txt ha-448204:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1129517852/001/cp-test_ha-448204.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204:/home/docker/cp-test.txt ha-448204-m02:/home/docker/cp-test_ha-448204_ha-448204-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test_ha-448204_ha-448204-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204:/home/docker/cp-test.txt ha-448204-m03:/home/docker/cp-test_ha-448204_ha-448204-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test_ha-448204_ha-448204-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204:/home/docker/cp-test.txt ha-448204-m04:/home/docker/cp-test_ha-448204_ha-448204-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test_ha-448204_ha-448204-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp testdata/cp-test.txt ha-448204-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1129517852/001/cp-test_ha-448204-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m02:/home/docker/cp-test.txt ha-448204:/home/docker/cp-test_ha-448204-m02_ha-448204.txt
E1222 00:53:31.813691 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test_ha-448204-m02_ha-448204.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m02:/home/docker/cp-test.txt ha-448204-m03:/home/docker/cp-test_ha-448204-m02_ha-448204-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test_ha-448204-m02_ha-448204-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m02:/home/docker/cp-test.txt ha-448204-m04:/home/docker/cp-test_ha-448204-m02_ha-448204-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test_ha-448204-m02_ha-448204-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp testdata/cp-test.txt ha-448204-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1129517852/001/cp-test_ha-448204-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m03:/home/docker/cp-test.txt ha-448204:/home/docker/cp-test_ha-448204-m03_ha-448204.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test_ha-448204-m03_ha-448204.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m03:/home/docker/cp-test.txt ha-448204-m02:/home/docker/cp-test_ha-448204-m03_ha-448204-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test_ha-448204-m03_ha-448204-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m03:/home/docker/cp-test.txt ha-448204-m04:/home/docker/cp-test_ha-448204-m03_ha-448204-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test_ha-448204-m03_ha-448204-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp testdata/cp-test.txt ha-448204-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1129517852/001/cp-test_ha-448204-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m04:/home/docker/cp-test.txt ha-448204:/home/docker/cp-test_ha-448204-m04_ha-448204.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204 "sudo cat /home/docker/cp-test_ha-448204-m04_ha-448204.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m04:/home/docker/cp-test.txt ha-448204-m02:/home/docker/cp-test_ha-448204-m04_ha-448204-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m02 "sudo cat /home/docker/cp-test_ha-448204-m04_ha-448204-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 cp ha-448204-m04:/home/docker/cp-test.txt ha-448204-m03:/home/docker/cp-test_ha-448204-m04_ha-448204-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 ssh -n ha-448204-m03 "sudo cat /home/docker/cp-test_ha-448204-m04_ha-448204-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.31s)
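CopyFile above checks every ordered pair of nodes: copy cp-test.txt from a source node to each destination, then cat it back over SSH. A sketch of that pair loop (profile, node names, and file naming follow the log; the run helper is illustrative and omits the local /tmp copies the test also makes):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from the log with the given arguments.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	nodes := []string{"ha-448204", "ha-448204-m02", "ha-448204-m03", "ha-448204-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("-p", "ha-448204", "cp",
				src+":/home/docker/cp-test.txt", remote); err != nil {
				fmt.Println("cp failed:", err)
				return
			}
			if err := run("-p", "ha-448204", "ssh", "-n", dst,
				fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst)); err != nil {
				fmt.Println("verify failed:", err)
				return
			}
		}
	}
	fmt.Println("all node-to-node copies verified")
}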

TestMultiControlPlane/serial/StopSecondaryNode (12.84s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 node stop m02 --alsologtostderr -v 5: (12.092282999s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5: exit status 7 (751.630059ms)

-- stdout --
	ha-448204
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-448204-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-448204-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-448204-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1222 00:53:56.218620 1957275 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:53:56.218739 1957275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:53:56.218749 1957275 out.go:374] Setting ErrFile to fd 2...
	I1222 00:53:56.218754 1957275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:53:56.219098 1957275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:53:56.219403 1957275 out.go:368] Setting JSON to false
	I1222 00:53:56.219431 1957275 mustload.go:66] Loading cluster: ha-448204
	I1222 00:53:56.220210 1957275 config.go:182] Loaded profile config "ha-448204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:53:56.220228 1957275 status.go:174] checking status of ha-448204 ...
	I1222 00:53:56.221104 1957275 cli_runner.go:164] Run: docker container inspect ha-448204 --format={{.State.Status}}
	I1222 00:53:56.225143 1957275 notify.go:221] Checking for updates...
	I1222 00:53:56.248166 1957275 status.go:371] ha-448204 host status = "Running" (err=<nil>)
	I1222 00:53:56.248190 1957275 host.go:66] Checking if "ha-448204" exists ...
	I1222 00:53:56.248565 1957275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-448204
	I1222 00:53:56.278962 1957275 host.go:66] Checking if "ha-448204" exists ...
	I1222 00:53:56.279254 1957275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:53:56.279298 1957275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-448204
	I1222 00:53:56.298509 1957275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38476 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/ha-448204/id_rsa Username:docker}
	I1222 00:53:56.394860 1957275 ssh_runner.go:195] Run: systemctl --version
	I1222 00:53:56.400998 1957275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:53:56.414044 1957275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:53:56.484023 1957275 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-22 00:53:56.47435219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:53:56.484611 1957275 kubeconfig.go:125] found "ha-448204" server: "https://192.168.49.254:8443"
	I1222 00:53:56.484653 1957275 api_server.go:166] Checking apiserver status ...
	I1222 00:53:56.484702 1957275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:53:56.496733 1957275 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	I1222 00:53:56.505963 1957275 api_server.go:182] apiserver freezer: "11:freezer:/docker/780cc6983d8970f3c8dfc0ceb5608c2d57f00135593850881378eb46eb985e48/crio/crio-5b6ad6ede5efb0e1b767294d28d52c4b42aa64ba44ca2cfe008de907349c1e56"
	I1222 00:53:56.506038 1957275 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/780cc6983d8970f3c8dfc0ceb5608c2d57f00135593850881378eb46eb985e48/crio/crio-5b6ad6ede5efb0e1b767294d28d52c4b42aa64ba44ca2cfe008de907349c1e56/freezer.state
	I1222 00:53:56.514487 1957275 api_server.go:204] freezer state: "THAWED"
	I1222 00:53:56.514516 1957275 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1222 00:53:56.522560 1957275 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1222 00:53:56.522596 1957275 status.go:463] ha-448204 apiserver status = Running (err=<nil>)
	I1222 00:53:56.522607 1957275 status.go:176] ha-448204 status: &{Name:ha-448204 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:53:56.522623 1957275 status.go:174] checking status of ha-448204-m02 ...
	I1222 00:53:56.522951 1957275 cli_runner.go:164] Run: docker container inspect ha-448204-m02 --format={{.State.Status}}
	I1222 00:53:56.539621 1957275 status.go:371] ha-448204-m02 host status = "Stopped" (err=<nil>)
	I1222 00:53:56.539646 1957275 status.go:384] host is not running, skipping remaining checks
	I1222 00:53:56.539653 1957275 status.go:176] ha-448204-m02 status: &{Name:ha-448204-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:53:56.539674 1957275 status.go:174] checking status of ha-448204-m03 ...
	I1222 00:53:56.539979 1957275 cli_runner.go:164] Run: docker container inspect ha-448204-m03 --format={{.State.Status}}
	I1222 00:53:56.557069 1957275 status.go:371] ha-448204-m03 host status = "Running" (err=<nil>)
	I1222 00:53:56.557095 1957275 host.go:66] Checking if "ha-448204-m03" exists ...
	I1222 00:53:56.557561 1957275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-448204-m03
	I1222 00:53:56.574245 1957275 host.go:66] Checking if "ha-448204-m03" exists ...
	I1222 00:53:56.574642 1957275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:53:56.574690 1957275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-448204-m03
	I1222 00:53:56.596765 1957275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38486 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/ha-448204-m03/id_rsa Username:docker}
	I1222 00:53:56.698852 1957275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:53:56.712398 1957275 kubeconfig.go:125] found "ha-448204" server: "https://192.168.49.254:8443"
	I1222 00:53:56.712427 1957275 api_server.go:166] Checking apiserver status ...
	I1222 00:53:56.712490 1957275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:53:56.724059 1957275 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	I1222 00:53:56.732872 1957275 api_server.go:182] apiserver freezer: "11:freezer:/docker/551aa117907816bb73bb335ff8764d66588fa7638026fa3eb13327c8d0f14052/crio/crio-7f0d06a82162f84107e7cd3569b7233cfce03affb446438afc038b0607683b55"
	I1222 00:53:56.733013 1957275 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/551aa117907816bb73bb335ff8764d66588fa7638026fa3eb13327c8d0f14052/crio/crio-7f0d06a82162f84107e7cd3569b7233cfce03affb446438afc038b0607683b55/freezer.state
	I1222 00:53:56.740234 1957275 api_server.go:204] freezer state: "THAWED"
	I1222 00:53:56.740263 1957275 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1222 00:53:56.748361 1957275 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1222 00:53:56.748390 1957275 status.go:463] ha-448204-m03 apiserver status = Running (err=<nil>)
	I1222 00:53:56.748399 1957275 status.go:176] ha-448204-m03 status: &{Name:ha-448204-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:53:56.748444 1957275 status.go:174] checking status of ha-448204-m04 ...
	I1222 00:53:56.748775 1957275 cli_runner.go:164] Run: docker container inspect ha-448204-m04 --format={{.State.Status}}
	I1222 00:53:56.766713 1957275 status.go:371] ha-448204-m04 host status = "Running" (err=<nil>)
	I1222 00:53:56.766743 1957275 host.go:66] Checking if "ha-448204-m04" exists ...
	I1222 00:53:56.767033 1957275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-448204-m04
	I1222 00:53:56.786098 1957275 host.go:66] Checking if "ha-448204-m04" exists ...
	I1222 00:53:56.786393 1957275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:53:56.786442 1957275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-448204-m04
	I1222 00:53:56.806760 1957275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38491 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/ha-448204-m04/id_rsa Username:docker}
	I1222 00:53:56.902401 1957275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:53:56.914455 1957275 status.go:176] ha-448204-m04 status: &{Name:ha-448204-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.84s)
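
Worth noting from the Non-zero exit above: minikube status prints per-node state and also encodes it in the exit code, so automation can detect a degraded cluster without parsing text. A sketch against the same profile:

  # Exit code 0 means every node is fully up; the nonzero code
  # (7 in the run above) flags stopped components.
  minikube -p ha-448204 status
  rc=$?
  [ "$rc" -ne 0 ] && echo "cluster degraded (exit code $rc)"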

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.56s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 node start m02 --alsologtostderr -v 5: (18.302104736s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5: (1.155567502s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.03s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 stop --alsologtostderr -v 5: (26.635460174s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 start --wait true --alsologtostderr -v 5
E1222 00:55:46.888176 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:55:47.970779 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:56:15.654060 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:56:25.759812 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 start --wait true --alsologtostderr -v 5: (1m45.223736002s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.03s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.21s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 node delete m03 --alsologtostderr -v 5: (11.168809443s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.21s)
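
The go-template query above is how the test confirms node health after the delete: it prints one Ready-condition status per node, and every line must read True. In standalone form:

  # One line per node: the status of its Ready condition.
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'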

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (35.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 stop --alsologtostderr -v 5: (35.867002439s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5: exit status 7 (106.76646ms)

-- stdout --
	ha-448204
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-448204-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-448204-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 00:57:19.190159 1969123 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:57:19.190306 1969123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:57:19.190337 1969123 out.go:374] Setting ErrFile to fd 2...
	I1222 00:57:19.190349 1969123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:57:19.190612 1969123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 00:57:19.190826 1969123 out.go:368] Setting JSON to false
	I1222 00:57:19.190867 1969123 mustload.go:66] Loading cluster: ha-448204
	I1222 00:57:19.190964 1969123 notify.go:221] Checking for updates...
	I1222 00:57:19.191315 1969123 config.go:182] Loaded profile config "ha-448204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 00:57:19.191340 1969123 status.go:174] checking status of ha-448204 ...
	I1222 00:57:19.192194 1969123 cli_runner.go:164] Run: docker container inspect ha-448204 --format={{.State.Status}}
	I1222 00:57:19.210554 1969123 status.go:371] ha-448204 host status = "Stopped" (err=<nil>)
	I1222 00:57:19.210580 1969123 status.go:384] host is not running, skipping remaining checks
	I1222 00:57:19.210587 1969123 status.go:176] ha-448204 status: &{Name:ha-448204 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:57:19.210610 1969123 status.go:174] checking status of ha-448204-m02 ...
	I1222 00:57:19.210926 1969123 cli_runner.go:164] Run: docker container inspect ha-448204-m02 --format={{.State.Status}}
	I1222 00:57:19.227737 1969123 status.go:371] ha-448204-m02 host status = "Stopped" (err=<nil>)
	I1222 00:57:19.227766 1969123 status.go:384] host is not running, skipping remaining checks
	I1222 00:57:19.227774 1969123 status.go:176] ha-448204-m02 status: &{Name:ha-448204-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:57:19.227798 1969123 status.go:174] checking status of ha-448204-m04 ...
	I1222 00:57:19.228098 1969123 cli_runner.go:164] Run: docker container inspect ha-448204-m04 --format={{.State.Status}}
	I1222 00:57:19.251537 1969123 status.go:371] ha-448204-m04 host status = "Stopped" (err=<nil>)
	I1222 00:57:19.251558 1969123 status.go:384] host is not running, skipping remaining checks
	I1222 00:57:19.251565 1969123 status.go:176] ha-448204-m04 status: &{Name:ha-448204-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.97s)
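
For scripted checks, the same status is available as JSON (--output json, as the MultiNode CopyFile test later in this report uses). A sketch, assuming jq is installed and that a multi-node profile yields a JSON list of node objects:

  minikube -p ha-448204 status --output json | jq -r '.[] | "\(.Name): \(.Host)"'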

TestMultiControlPlane/serial/RestartCluster (82.04s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1222 00:57:43.837448 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m21.098507188s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.04s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (86.33s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 node add --control-plane --alsologtostderr -v 5: (1m25.273089155s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-448204 status --alsologtostderr -v 5: (1.059220312s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.33s)
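
node add --control-plane is the whole HA-growth step here: it provisions a new machine and joins it as a control-plane member while the shared endpoint (https://192.168.49.254:8443 in the logs above) keeps serving. Sketch:

  # Grow the control plane by one node, then confirm it joined.
  minikube -p ha-448204 node add --control-plane
  minikube -p ha-448204 status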

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.061502402s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestJSONOutput/start/Command (50.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-069941 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1222 01:00:47.971166 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-069941 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.126604673s)
--- PASS: TestJSONOutput/start/Command (50.13s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-069941 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-069941 --output=json --user=testUser: (5.860301028s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-868837 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-868837 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.506738ms)

-- stdout --
	{"specversion":"1.0","id":"7af8a7aa-1457-41c5-996a-4172567a54fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-868837] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"77504089-2f71-4084-8a51-7d272124c529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"f8deb769-37f1-40ee-93e0-18c713f2173c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75501016-ad47-4820-b854-29f043e0b544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig"}}
	{"specversion":"1.0","id":"4e744bc2-23d2-4908-894b-fabc1034aed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube"}}
	{"specversion":"1.0","id":"0c285c2d-eceb-4ae6-a8c2-62c6c9f7a817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"95bc18c2-d486-4a2e-843d-950f5b3c85c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"faa00539-0a7e-4702-b531-f951aa9a79f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-868837" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-868837
--- PASS: TestErrorJSONOutput (0.23s)
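
As the stdout above shows, --output=json makes minikube emit one CloudEvent per line (type io.k8s.sigs.minikube.step, .info, or .error, with the payload under data; the error event carries the exit code, 56 here). A sketch for pulling out just the step messages, assuming jq is available and using a placeholder profile name:

  minikube start -p demo --output=json --driver=docker --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'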

TestKicCustomNetwork/create_custom_network (40.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-098789 --network=
E1222 01:01:25.759697 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-098789 --network=: (38.061348145s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-098789" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-098789
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-098789: (2.243382587s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.33s)

TestKicCustomNetwork/use_default_bridge_network (34.86s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-523340 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-523340 --network=bridge: (32.769581607s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-523340" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-523340
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-523340: (2.064353927s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.86s)

TestKicExistingNetwork (32.62s)

=== RUN   TestKicExistingNetwork
I1222 01:02:37.640129 1881219 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1222 01:02:37.656363 1881219 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1222 01:02:37.657305 1881219 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1222 01:02:37.657353 1881219 cli_runner.go:164] Run: docker network inspect existing-network
W1222 01:02:37.673829 1881219 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1222 01:02:37.673861 1881219 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1222 01:02:37.673880 1881219 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1222 01:02:37.673997 1881219 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 01:02:37.693769 1881219 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-960128ceeef9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:d0:32:cb:b9:20} reservation:<nil>}
I1222 01:02:37.694148 1881219 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400232a550}
I1222 01:02:37.694174 1881219 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1222 01:02:37.694229 1881219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1222 01:02:37.754661 1881219 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-692906 --network=existing-network
E1222 01:02:43.837472 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-692906 --network=existing-network: (30.233425622s)
helpers_test.go:176: Cleaning up "existing-network-692906" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-692906
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-692906: (2.239580206s)
I1222 01:03:10.244737 1881219 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.62s)
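
The trace above shows the subnet probing behind --network=existing-network: 192.168.49.0/24 was already taken, so minikube created the network on the next free private /24 (192.168.58.0/24). Pre-creating and reusing a network by hand looks like this sketch (my-net and existing-net-demo are placeholder names):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 my-net
  docker network inspect my-net --format '{{(index .IPAM.Config 0).Subnet}}'
  minikube start -p existing-net-demo --network=my-net --driver=docker --container-runtime=crio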

TestKicCustomSubnet (36.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-224365 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-224365 --subnet=192.168.60.0/24: (34.5489676s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-224365 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-224365" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-224365
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-224365: (2.301214367s)
--- PASS: TestKicCustomSubnet (36.87s)

TestKicStaticIP (33.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-657776 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-657776 --static-ip=192.168.200.200: (31.360409168s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-657776 ip
helpers_test.go:176: Cleaning up "static-ip-657776" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-657776
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-657776: (2.237519943s)
--- PASS: TestKicStaticIP (33.76s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (72.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-957716 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-957716 --driver=docker  --container-runtime=crio: (34.319716867s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-960842 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-960842 --driver=docker  --container-runtime=crio: (32.218013144s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-957716
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-960842
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-960842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-960842
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-960842: (2.098878346s)
helpers_test.go:176: Cleaning up "first-957716" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-957716
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-957716: (2.416876534s)
--- PASS: TestMinikubeProfile (72.44s)
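
The profile test rests on two commands: minikube profile <name> switches the active profile, and profile list -ojson reports all of them, which is what the assertions parse. Sketch:

  minikube profile first-957716   # make it the active profile
  minikube profile list -ojson    # machine-readable view of every profile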

TestMountStart/serial/StartWithMountFirst (9.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-207422 --memory=3072 --mount-string /tmp/TestMountStartserial1478766058/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-207422 --memory=3072 --mount-string /tmp/TestMountStartserial1478766058/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.111737585s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.11s)
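
The start flags above wire a host directory into the guest at boot: --mount-string takes host:guest, and --mount-port/--mount-uid/--mount-gid/--mount-msize tune the transport. A trimmed sketch mirroring that invocation (mount-demo is a placeholder profile):

  minikube start -p mount-demo --mount-string "$PWD:/minikube-host" --mount-port 46464 --driver=docker --container-runtime=crio
  minikube -p mount-demo ssh -- ls /minikube-host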

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-207422 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-209235 --memory=3072 --mount-string /tmp/TestMountStartserial1478766058/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1222 01:05:47.971165 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-209235 --memory=3072 --mount-string /tmp/TestMountStartserial1478766058/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.713862559s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.71s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-209235 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-207422 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-207422 --alsologtostderr -v=5: (1.697439512s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-209235 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-209235
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-209235: (1.293559456s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-209235
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-209235: (7.02909753s)
--- PASS: TestMountStart/serial/RestartStopped (8.03s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-209235 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (78.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-171197 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1222 01:06:08.819492 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:06:25.759608 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:07:11.014736 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-171197 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.650839085s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.19s)

TestMultiNode/serial/DeployApp2Nodes (4.47s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-171197 -- rollout status deployment/busybox: (2.726595561s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-bp5t5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-kzqms -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-bp5t5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-kzqms -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-bp5t5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-kzqms -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.47s)
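
The deployment check above is an end-to-end DNS test across both nodes: wait for the rollout, then every busybox pod must resolve both an external name and the cluster-internal service names. A condensed sketch of the same loop:

  kubectl rollout status deployment/busybox
  # In the test namespace only the busybox pods exist, so no selector is needed.
  for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done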

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-bp5t5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-bp5t5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-kzqms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-171197 -- exec busybox-7b57f96db7-kzqms -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (29.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-171197 -v=5 --alsologtostderr
E1222 01:07:43.835615 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-171197 -v=5 --alsologtostderr: (28.56232471s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.24s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-171197 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.5s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp testdata/cp-test.txt multinode-171197:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3994095263/001/cp-test_multinode-171197.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197:/home/docker/cp-test.txt multinode-171197-m02:/home/docker/cp-test_multinode-171197_multinode-171197-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test_multinode-171197_multinode-171197-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197:/home/docker/cp-test.txt multinode-171197-m03:/home/docker/cp-test_multinode-171197_multinode-171197-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m03 "sudo cat /home/docker/cp-test_multinode-171197_multinode-171197-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp testdata/cp-test.txt multinode-171197-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3994095263/001/cp-test_multinode-171197-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197-m02:/home/docker/cp-test.txt multinode-171197:/home/docker/cp-test_multinode-171197-m02_multinode-171197.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197 "sudo cat /home/docker/cp-test_multinode-171197-m02_multinode-171197.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197-m02:/home/docker/cp-test.txt multinode-171197-m03:/home/docker/cp-test_multinode-171197-m02_multinode-171197-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m03 "sudo cat /home/docker/cp-test_multinode-171197-m02_multinode-171197-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp testdata/cp-test.txt multinode-171197-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3994095263/001/cp-test_multinode-171197-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197-m03:/home/docker/cp-test.txt multinode-171197:/home/docker/cp-test_multinode-171197-m03_multinode-171197.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197 "sudo cat /home/docker/cp-test_multinode-171197-m03_multinode-171197.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 cp multinode-171197-m03:/home/docker/cp-test.txt multinode-171197-m02:/home/docker/cp-test_multinode-171197-m03_multinode-171197-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test_multinode-171197-m03_multinode-171197-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.50s)
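
(Every hop above is the same copy-then-verify pattern; a condensed sketch of one host-to-node and one node-to-node transfer, assuming a `minikube` binary on PATH and the node names from this run.)

    # host -> node
    minikube -p multinode-171197 cp testdata/cp-test.txt multinode-171197:/home/docker/cp-test.txt
    # node -> node: both sides of cp may be prefixed with a node name
    minikube -p multinode-171197 cp multinode-171197:/home/docker/cp-test.txt \
      multinode-171197-m02:/home/docker/cp-test.txt
    # verify the bytes arrived by reading the file over SSH on the target node
    minikube -p multinode-171197 ssh -n multinode-171197-m02 "sudo cat /home/docker/cp-test.txt"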

TestMultiNode/serial/StopNode (2.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-171197 node stop m03: (1.307891694s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-171197 status: exit status 7 (522.163339ms)
-- stdout --
	multinode-171197
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-171197-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-171197-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr: exit status 7 (532.569582ms)
-- stdout --
	multinode-171197
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-171197-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-171197-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1222 01:08:11.158497 2019811 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:08:11.158649 2019811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:08:11.158663 2019811 out.go:374] Setting ErrFile to fd 2...
	I1222 01:08:11.158669 2019811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:08:11.158949 2019811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:08:11.159182 2019811 out.go:368] Setting JSON to false
	I1222 01:08:11.159234 2019811 mustload.go:66] Loading cluster: multinode-171197
	I1222 01:08:11.159306 2019811 notify.go:221] Checking for updates...
	I1222 01:08:11.160613 2019811 config.go:182] Loaded profile config "multinode-171197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:08:11.160645 2019811 status.go:174] checking status of multinode-171197 ...
	I1222 01:08:11.161370 2019811 cli_runner.go:164] Run: docker container inspect multinode-171197 --format={{.State.Status}}
	I1222 01:08:11.180405 2019811 status.go:371] multinode-171197 host status = "Running" (err=<nil>)
	I1222 01:08:11.180431 2019811 host.go:66] Checking if "multinode-171197" exists ...
	I1222 01:08:11.180720 2019811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-171197
	I1222 01:08:11.213790 2019811 host.go:66] Checking if "multinode-171197" exists ...
	I1222 01:08:11.214103 2019811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:08:11.214157 2019811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-171197
	I1222 01:08:11.234543 2019811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38596 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/multinode-171197/id_rsa Username:docker}
	I1222 01:08:11.330861 2019811 ssh_runner.go:195] Run: systemctl --version
	I1222 01:08:11.337259 2019811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:08:11.350585 2019811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:08:11.411308 2019811 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-22 01:08:11.401697862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:08:11.411850 2019811 kubeconfig.go:125] found "multinode-171197" server: "https://192.168.67.2:8443"
	I1222 01:08:11.411882 2019811 api_server.go:166] Checking apiserver status ...
	I1222 01:08:11.411925 2019811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:08:11.423320 2019811 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	I1222 01:08:11.431353 2019811 api_server.go:182] apiserver freezer: "11:freezer:/docker/0d3ea8c81e08d17c0d14944bbc51a791453ecf8df93fdddd8a18e2678b2735fc/crio/crio-6202cf43baa28244b1511a1237fe10fe4efc6aa8146508736a800e005ef349cd"
	I1222 01:08:11.431431 2019811 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0d3ea8c81e08d17c0d14944bbc51a791453ecf8df93fdddd8a18e2678b2735fc/crio/crio-6202cf43baa28244b1511a1237fe10fe4efc6aa8146508736a800e005ef349cd/freezer.state
	I1222 01:08:11.439008 2019811 api_server.go:204] freezer state: "THAWED"
	I1222 01:08:11.439038 2019811 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1222 01:08:11.447338 2019811 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1222 01:08:11.447366 2019811 status.go:463] multinode-171197 apiserver status = Running (err=<nil>)
	I1222 01:08:11.447377 2019811 status.go:176] multinode-171197 status: &{Name:multinode-171197 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 01:08:11.447394 2019811 status.go:174] checking status of multinode-171197-m02 ...
	I1222 01:08:11.447710 2019811 cli_runner.go:164] Run: docker container inspect multinode-171197-m02 --format={{.State.Status}}
	I1222 01:08:11.464428 2019811 status.go:371] multinode-171197-m02 host status = "Running" (err=<nil>)
	I1222 01:08:11.464454 2019811 host.go:66] Checking if "multinode-171197-m02" exists ...
	I1222 01:08:11.464752 2019811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-171197-m02
	I1222 01:08:11.481036 2019811 host.go:66] Checking if "multinode-171197-m02" exists ...
	I1222 01:08:11.481450 2019811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:08:11.481545 2019811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-171197-m02
	I1222 01:08:11.499042 2019811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38601 SSHKeyPath:/home/jenkins/minikube-integration/22179-1874095/.minikube/machines/multinode-171197-m02/id_rsa Username:docker}
	I1222 01:08:11.591713 2019811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:08:11.604277 2019811 status.go:176] multinode-171197-m02 status: &{Name:multinode-171197-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1222 01:08:11.604310 2019811 status.go:174] checking status of multinode-171197-m03 ...
	I1222 01:08:11.604628 2019811 cli_runner.go:164] Run: docker container inspect multinode-171197-m03 --format={{.State.Status}}
	I1222 01:08:11.621316 2019811 status.go:371] multinode-171197-m03 host status = "Stopped" (err=<nil>)
	I1222 01:08:11.621382 2019811 status.go:384] host is not running, skipping remaining checks
	I1222 01:08:11.621388 2019811 status.go:176] multinode-171197-m03 status: &{Name:multinode-171197-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
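
(Note the exit codes above: `minikube status` exits 0 only when everything is running, and exited 7 here once a single worker was stopped, so scripted health checks should branch on the exit code rather than parse the text. A sketch:)

    # non-zero exit (7 in this run) as soon as any node in the profile is down
    if ! minikube -p multinode-171197 status > /dev/null; then
      echo "at least one node is not running"
    fi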

TestMultiNode/serial/StartAfterStop (8.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-171197 node start m03 -v=5 --alsologtostderr: (7.343530678s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.11s)

TestMultiNode/serial/RestartKeepsNodes (72.59s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-171197
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-171197
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-171197: (25.13390352s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-171197 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-171197 --wait=true -v=5 --alsologtostderr: (47.320937027s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-171197
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.59s)
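
(The invariant checked here is that a full stop/start cycle with --wait=true brings back the same node set; a sketch of the comparison, assuming a `minikube` binary on PATH:)

    minikube node list -p multinode-171197 > nodes-before.txt
    minikube stop -p multinode-171197
    minikube start -p multinode-171197 --wait=true
    # diff exits non-zero if the restart lost or renamed a node
    minikube node list -p multinode-171197 | diff nodes-before.txt -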

TestMultiNode/serial/DeleteNode (5.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-171197 node delete m03: (4.913999975s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-171197 stop: (23.805830763s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-171197 status: exit status 7 (92.462535ms)
-- stdout --
	multinode-171197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-171197-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr: exit status 7 (97.730187ms)
-- stdout --
	multinode-171197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-171197-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1222 01:10:01.874667 2027663 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:10:01.874807 2027663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:10:01.874837 2027663 out.go:374] Setting ErrFile to fd 2...
	I1222 01:10:01.874851 2027663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:10:01.875135 2027663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:10:01.875365 2027663 out.go:368] Setting JSON to false
	I1222 01:10:01.875418 2027663 mustload.go:66] Loading cluster: multinode-171197
	I1222 01:10:01.875497 2027663 notify.go:221] Checking for updates...
	I1222 01:10:01.875837 2027663 config.go:182] Loaded profile config "multinode-171197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:10:01.875855 2027663 status.go:174] checking status of multinode-171197 ...
	I1222 01:10:01.876432 2027663 cli_runner.go:164] Run: docker container inspect multinode-171197 --format={{.State.Status}}
	I1222 01:10:01.897095 2027663 status.go:371] multinode-171197 host status = "Stopped" (err=<nil>)
	I1222 01:10:01.897120 2027663 status.go:384] host is not running, skipping remaining checks
	I1222 01:10:01.897127 2027663 status.go:176] multinode-171197 status: &{Name:multinode-171197 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 01:10:01.897157 2027663 status.go:174] checking status of multinode-171197-m02 ...
	I1222 01:10:01.897509 2027663 cli_runner.go:164] Run: docker container inspect multinode-171197-m02 --format={{.State.Status}}
	I1222 01:10:01.926090 2027663 status.go:371] multinode-171197-m02 host status = "Stopped" (err=<nil>)
	I1222 01:10:01.926114 2027663 status.go:384] host is not running, skipping remaining checks
	I1222 01:10:01.926122 2027663 status.go:176] multinode-171197-m02 status: &{Name:multinode-171197-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

TestMultiNode/serial/RestartMultiNode (52.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-171197 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1222 01:10:47.971446 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-171197 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.811228912s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-171197 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.50s)

TestMultiNode/serial/ValidateNameConflict (35.93s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-171197
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-171197-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-171197-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.293382ms)
-- stdout --
	* [multinode-171197-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-171197-m02' is duplicated with machine name 'multinode-171197-m02' in profile 'multinode-171197'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-171197-m03 --driver=docker  --container-runtime=crio
E1222 01:11:25.759805 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-171197-m03 --driver=docker  --container-runtime=crio: (33.337589949s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-171197
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-171197: exit status 80 (373.13836ms)
-- stdout --
	* Adding node m03 to cluster multinode-171197 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-171197-m03 already exists in multinode-171197-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-171197-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-171197-m03: (2.075229022s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.93s)
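
(Both refusals above are intended guardrails: a new profile may not reuse a machine name belonging to an existing multi-node profile, and `node add` will not adopt a machine that already belongs to another profile. Machine names follow the <profile>, <profile>-m02, ... pattern, so listing profiles first shows which names are taken. A sketch:)

    # inspect existing profiles (and thus reserved machine names) before picking a new name
    minikube profile list --output json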

TestScheduledStopUnix (110.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-389645 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-389645 --memory=3072 --driver=docker  --container-runtime=crio: (33.601847312s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-389645 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1222 01:12:08.359978 2036169 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:12:08.360119 2036169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:12:08.360135 2036169 out.go:374] Setting ErrFile to fd 2...
	I1222 01:12:08.360140 2036169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:12:08.360437 2036169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:12:08.360716 2036169 out.go:368] Setting JSON to false
	I1222 01:12:08.360845 2036169 mustload.go:66] Loading cluster: scheduled-stop-389645
	I1222 01:12:08.361194 2036169 config.go:182] Loaded profile config "scheduled-stop-389645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:12:08.361268 2036169 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/scheduled-stop-389645/config.json ...
	I1222 01:12:08.361603 2036169 mustload.go:66] Loading cluster: scheduled-stop-389645
	I1222 01:12:08.361732 2036169 config.go:182] Loaded profile config "scheduled-stop-389645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-389645 -n scheduled-stop-389645
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1222 01:12:08.796886 2036258 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:12:08.797458 2036258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:12:08.797510 2036258 out.go:374] Setting ErrFile to fd 2...
	I1222 01:12:08.797532 2036258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:12:08.797833 2036258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:12:08.798210 2036258 out.go:368] Setting JSON to false
	I1222 01:12:08.799366 2036258 daemonize_unix.go:73] killing process 2036197 as it is an old scheduled stop
	I1222 01:12:08.799545 2036258 mustload.go:66] Loading cluster: scheduled-stop-389645
	I1222 01:12:08.799962 2036258 config.go:182] Loaded profile config "scheduled-stop-389645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:12:08.800061 2036258 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/scheduled-stop-389645/config.json ...
	I1222 01:12:08.800252 2036258 mustload.go:66] Loading cluster: scheduled-stop-389645
	I1222 01:12:08.800391 2036258 config.go:182] Loaded profile config "scheduled-stop-389645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1222 01:12:08.809080 1881219 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/scheduled-stop-389645/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-389645 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
E1222 01:12:26.888456 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-389645 -n scheduled-stop-389645
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-389645
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-389645 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1222 01:12:34.737148 2036735 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:12:34.737258 2036735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:12:34.737273 2036735 out.go:374] Setting ErrFile to fd 2...
	I1222 01:12:34.737278 2036735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:12:34.737633 2036735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:12:34.737918 2036735 out.go:368] Setting JSON to false
	I1222 01:12:34.738011 2036735 mustload.go:66] Loading cluster: scheduled-stop-389645
	I1222 01:12:34.738605 2036735 config.go:182] Loaded profile config "scheduled-stop-389645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:12:34.738696 2036735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/scheduled-stop-389645/config.json ...
	I1222 01:12:34.738885 2036735 mustload.go:66] Loading cluster: scheduled-stop-389645
	I1222 01:12:34.739028 2036735 config.go:182] Loaded profile config "scheduled-stop-389645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
E1222 01:12:43.839870 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-389645
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-389645: exit status 7 (65.953233ms)
-- stdout --
	scheduled-stop-389645
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-389645 -n scheduled-stop-389645
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-389645 -n scheduled-stop-389645: exit status 7 (72.575036ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-389645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-389645
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-389645: (5.447091853s)
--- PASS: TestScheduledStopUnix (110.63s)
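
(The scheduled-stop lifecycle exercised above can be driven directly; a sketch with the profile name from this run. Re-arming with --schedule replaces any pending schedule, as the "killing process ... old scheduled stop" log line shows.)

    minikube stop -p scheduled-stop-389645 --schedule 5m      # arm a stop 5 minutes out
    minikube stop -p scheduled-stop-389645 --cancel-scheduled # cancel whatever is pending
    minikube stop -p scheduled-stop-389645 --schedule 15s     # arm a short schedule ...
    sleep 20                                                  # ... and let it fire
    minikube status --format={{.Host}} -p scheduled-stop-389645  # prints "Stopped", exits 7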

TestInsufficientStorage (13.06s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-993583 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-993583 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.522205346s)
-- stdout --
	{"specversion":"1.0","id":"309cceba-4768-4360-8c0f-92a0d066ce29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-993583] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6da30f9f-29fe-4198-84e1-28fa48331ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"b4150981-74b1-4971-8be8-1f159922d290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b549b1c-f0cd-4a25-b87a-782a1151a1c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig"}}
	{"specversion":"1.0","id":"53d9ac52-4b28-4ae1-a85d-dfea57a80332","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube"}}
	{"specversion":"1.0","id":"ca641f7a-aee4-4c6d-ab47-f5f677ae4b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"441a00b9-3167-4d9b-877c-2ce05081722e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"102efb90-231f-486d-973e-4723ca168820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"70e3f395-82d3-41dc-ab97-77c28f23b96b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"15d4732b-3fb9-4ca9-bac7-56ce6b94cb51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8dcbe97-1561-4e5f-9260-f2c99e254c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"38bb3a2a-952c-4fa3-a15d-a53aed38fb36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-993583\" primary control-plane node in \"insufficient-storage-993583\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf2bb962-5d31-43ca-84ac-37572a14b1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766219634-22260 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fa3bcb4-3765-4998-96be-18d95ca1f91e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2dc5ace2-331e-4d3e-a1a8-5a77d7259df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-993583 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-993583 --output=json --layout=cluster: exit status 7 (305.442765ms)
-- stdout --
	{"Name":"insufficient-storage-993583","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-993583","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1222 01:13:36.142668 2038606 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-993583" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-993583 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-993583 --output=json --layout=cluster: exit status 7 (286.283673ms)
-- stdout --
	{"Name":"insufficient-storage-993583","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-993583","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1222 01:13:36.430805 2038674 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-993583" does not appear in /home/jenkins/minikube-integration/22179-1874095/kubeconfig
	E1222 01:13:36.440760 2038674 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/insufficient-storage-993583/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-993583" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-993583
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-993583: (1.950274351s)
--- PASS: TestInsufficientStorage (13.06s)
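
(With --output=json the start emits one CloudEvents JSON object per line, so the failure above is machine-readable; a sketch of pulling out the error message, assuming jq is available:)

    minikube start -p insufficient-storage-993583 --memory=3072 --output=json \
        --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'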

TestRunningBinaryUpgrade (301.88s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3450073331 start -p running-upgrade-834402 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3450073331 start -p running-upgrade-834402 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.550606714s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-834402 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1222 01:21:25.759406 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:22:43.836273 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:22:48.819840 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:23:51.015087 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:25:47.972229 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-834402 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.217426013s)
helpers_test.go:176: Cleaning up "running-upgrade-834402" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-834402
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-834402: (1.95118192s)
--- PASS: TestRunningBinaryUpgrade (301.88s)

TestMissingContainerUpgrade (111.48s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.4109916232 start -p missing-upgrade-158754 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.4109916232 start -p missing-upgrade-158754 --memory=3072 --driver=docker  --container-runtime=crio: (58.969788482s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-158754
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-158754
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-158754 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-158754 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.463560854s)
helpers_test.go:176: Cleaning up "missing-upgrade-158754" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-158754
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-158754: (2.250987576s)
--- PASS: TestMissingContainerUpgrade (111.48s)
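
(This scenario deletes the machine container out from under minikube and verifies that a plain start with the new binary recreates it from the surviving profile config; a condensed sketch of the sequence above:)

    docker stop missing-upgrade-158754 && docker rm missing-upgrade-158754
    out/minikube-linux-arm64 start -p missing-upgrade-158754 --memory=3072 \
      --driver=docker --container-runtime=crio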

TestPause/serial/Start (65.89s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-583340 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-583340 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m5.885388322s)
--- PASS: TestPause/serial/Start (65.89s)

TestPause/serial/SecondStartNoReconfiguration (31.31s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-583340 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-583340 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.28606702s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.31s)

TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (314.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1752500612 start -p stopped-upgrade-608183 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1222 01:15:47.971169 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1752500612 start -p stopped-upgrade-608183 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.798573202s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1752500612 -p stopped-upgrade-608183 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1752500612 -p stopped-upgrade-608183 stop: (1.236127215s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-608183 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1222 01:16:25.759357 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:17:43.836046 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-608183 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.749784809s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (314.79s)
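
(The upgrade path under test: create the profile with the old release, stop it, then start the same profile with the new binary, which reuses the on-disk profile config. A condensed sketch of the commands above, with the temp-binary path from this run:)

    /tmp/minikube-v1.35.0.1752500612 start -p stopped-upgrade-608183 --memory=3072 \
      --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0.1752500612 -p stopped-upgrade-608183 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-608183 --memory=3072 \
      --driver=docker --container-runtime=crio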

TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-608183
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-608183: (1.84333853s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

TestPreload/Start-NoPreload-PullImage (68.26s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-488704 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1222 01:26:25.759595 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-488704 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m1.334924281s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-488704 image pull public.ecr.aws/docker/library/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-488704
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-488704: (6.031845076s)
--- PASS: TestPreload/Start-NoPreload-PullImage (68.26s)

TestPreload/Restart-With-Preload-Check-User-Image (56.51s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:72: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-488704 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1222 01:27:43.835825 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:72: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-488704 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (56.273404371s)
preload_test.go:77: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-488704 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (56.51s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-346588 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-346588 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (109.68806ms)
-- stdout --
	* [NoKubernetes-346588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
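
The non-zero exit here is the expected result: minikube treats --kubernetes-version combined with --no-kubernetes as a usage error (MK_USAGE, exit status 14) before any cluster work begins. A sketch of the failing call and the fix the stderr above suggests:

    # Rejected: the two flags are mutually exclusive.
    minikube start -p NoKubernetes-346588 --no-kubernetes --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio

    # Fix suggested by minikube: drop any pinned version from the global config,
    # then start without Kubernetes.
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-346588 --no-kubernetes --driver=docker --container-runtime=crio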

TestNoKubernetes/serial/StartWithK8s (36.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-346588 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-346588 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.283413658s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-346588 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.71s)

TestNetworkPlugins/group/false (4.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-678854 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-678854 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (265.725176ms)
-- stdout --
	* [false-678854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1222 01:28:43.327917 2093785 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:43.328021 2093785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:43.328031 2093785 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:43.328036 2093785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:43.328282 2093785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1874095/.minikube/bin
	I1222 01:28:43.328701 2093785 out.go:368] Setting JSON to false
	I1222 01:28:43.329582 2093785 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115873,"bootTime":1766251051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1222 01:28:43.329652 2093785 start.go:143] virtualization:  
	I1222 01:28:43.333278 2093785 out.go:179] * [false-678854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:43.337256 2093785 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:43.337416 2093785 notify.go:221] Checking for updates...
	I1222 01:28:43.343228 2093785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:43.346125 2093785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1874095/kubeconfig
	I1222 01:28:43.348967 2093785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1874095/.minikube
	I1222 01:28:43.352034 2093785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:43.354910 2093785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:28:43.358732 2093785 config.go:182] Loaded profile config "NoKubernetes-346588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1222 01:28:43.358844 2093785 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:28:43.407905 2093785 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:28:43.408031 2093785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:43.502021 2093785 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:28:43.492144082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:43.502145 2093785 docker.go:319] overlay module found
	I1222 01:28:43.505258 2093785 out.go:179] * Using the docker driver based on user configuration
	I1222 01:28:43.508224 2093785 start.go:309] selected driver: docker
	I1222 01:28:43.508244 2093785 start.go:928] validating driver "docker" against <nil>
	I1222 01:28:43.508265 2093785 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:28:43.512204 2093785 out.go:203] 
	W1222 01:28:43.514982 2093785 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1222 01:28:43.517873 2093785 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-678854 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-678854

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-678854

>>> host: /etc/nsswitch.conf:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/hosts:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/resolv.conf:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-678854

>>> host: crictl pods:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: crictl containers:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> k8s: describe netcat deployment:
error: context "false-678854" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-678854" does not exist

>>> k8s: netcat logs:
error: context "false-678854" does not exist

>>> k8s: describe coredns deployment:
error: context "false-678854" does not exist

>>> k8s: describe coredns pods:
error: context "false-678854" does not exist

>>> k8s: coredns logs:
error: context "false-678854" does not exist

>>> k8s: describe api server pod(s):
error: context "false-678854" does not exist

>>> k8s: api server logs:
error: context "false-678854" does not exist

>>> host: /etc/cni:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: ip a s:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: ip r s:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: iptables-save:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: iptables table nat:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> k8s: describe kube-proxy daemon set:
error: context "false-678854" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-678854" does not exist

>>> k8s: kube-proxy logs:
error: context "false-678854" does not exist

>>> host: kubelet daemon status:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: kubelet daemon config:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> k8s: kubelet logs:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-678854

>>> host: docker daemon status:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: docker daemon config:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/docker/daemon.json:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: docker system info:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: cri-docker daemon status:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: cri-docker daemon config:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: cri-dockerd version:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: containerd daemon status:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: containerd daemon config:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/containerd/config.toml:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: containerd config dump:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: crio daemon status:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: crio daemon config:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: /etc/crio:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

>>> host: crio config:
* Profile "false-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678854"

----------------------- debugLogs end: false-678854 [took: 4.319409984s] --------------------------------
helpers_test.go:176: Cleaning up "false-678854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-678854
--- PASS: TestNetworkPlugins/group/false (4.80s)
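
This "false" CNI group passes precisely because the start is rejected: the crio runtime requires a CNI, so --cni=false exits with MK_USAGE (status 14) before a cluster is created, and the debugLogs above only confirm no profile was left behind. A hedged sketch; bridge is one of minikube's documented --cni values, not something this run exercised:

    # Rejected: 'The "crio" container runtime requires CNI' (exit status 14).
    minikube start -p false-678854 --memory=3072 --cni=false --driver=docker --container-runtime=crio

    # With crio, select a concrete CNI instead (bridge shown as one documented option).
    minikube start -p false-678854 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio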

TestNoKubernetes/serial/StartWithStopK8s (19.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.533039381s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-346588 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-346588 status -o json: exit status 2 (398.011357ms)
-- stdout --
	{"Name":"NoKubernetes-346588","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-346588
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-346588: (2.082527235s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.01s)
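
The JSON status above is exactly what the test asserts on: the host is Running while Kubelet and APIServer are Stopped, and minikube signals that degraded state through the non-zero exit (status 2 here). A small sketch of reading the same fields; jq is an assumption, not part of the harness:

    # status exits non-zero when components are stopped, so capture instead of failing.
    status_json=$(minikube -p NoKubernetes-346588 status -o json) || true
    echo "$status_json" | jq -r '"host=\(.Host) kubelet=\(.Kubelet) apiserver=\(.APIServer)"'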

TestNoKubernetes/serial/Start (8.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-346588 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.989930764s)
--- PASS: TestNoKubernetes/serial/Start (8.99s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22179-1874095/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-346588 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-346588 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.492876ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
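
Exit status 1 is the pass condition: inside the node, systemctl reports the inactive kubelet with a non-zero status (3 is systemd's usual code for an inactive unit), and minikube ssh surfaces that as exit 1. The same check, sketched as a plain conditional:

    # Exit 0 only if kubelet is active; inactive yields non-zero, which is what
    # a --no-kubernetes profile should produce.
    if minikube ssh -p NoKubernetes-346588 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is running"
    else
      echo "kubelet is not running (expected for --no-kubernetes)"
    fi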

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-346588
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-346588: (1.34102655s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (7.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-346588 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-346588 --driver=docker  --container-runtime=crio: (7.503149409s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-346588 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-346588 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.031128ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStartStop/group/old-k8s-version/serial/FirstStart (60.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1222 01:31:25.759649 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.958698938s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.96s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-535197 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [873a5db1-c241-4c03-8906-ba8ace1623cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [873a5db1-c241-4c03-8906-ba8ace1623cf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004049291s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-535197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
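
The harness polls up to 8m for pods labeled integration-test=busybox to reach Running/Ready before running the ulimit probe. Outside the harness the same wait can be expressed directly with kubectl, using only the context, manifest path, and label from the log:

    kubectl --context old-k8s-version-535197 create -f testdata/busybox.yaml
    # Equivalent of the harness's readiness poll on the busybox pod.
    kubectl --context old-k8s-version-535197 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m0s
    kubectl --context old-k8s-version-535197 exec busybox -- /bin/sh -c "ulimit -n"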

TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-535197 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-535197 --alsologtostderr -v=3: (12.031266468s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197: exit status 7 (71.201416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-535197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
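
minikube status exits 7 against the stopped profile, which the test explicitly tolerates ("may be ok"); the dashboard addon is then enabled while the cluster is down so it takes effect on the next start. A sketch of handling that non-zero status without aborting a script:

    # Capture the exit code instead of failing on it; here it only reflects a Stopped host.
    host_state=$(minikube status --format='{{.Host}}' -p old-k8s-version-535197) || rc=$?
    echo "host=${host_state} rc=${rc:-0}"
    minikube addons enable dashboard -p old-k8s-version-535197 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4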

TestStartStop/group/old-k8s-version/serial/SecondStart (54.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-535197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.297204592s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-535197 -n old-k8s-version-535197
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.79s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-bhzmc" [143daf7b-4993-46cb-bc29-44e6d8f97352] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003690607s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-bhzmc" [143daf7b-4993-46cb-bc29-44e6d8f97352] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00316962s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-535197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-535197 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
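
The image check lists everything in the node and flags images that are not stock minikube/Kubernetes components (the kindnet and busybox images above). A rough sketch of the same idea; treating registry.k8s.io as "stock" is an assumption, and grep only approximates the harness's allow-list:

    # Plain listing prints one image per line; drop the core registry.k8s.io images
    # to approximate the "non-minikube image" set the test reports.
    minikube -p old-k8s-version-535197 image list | grep -v '^registry.k8s.io/'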

TestStartStop/group/embed-certs/serial/FirstStart (52.27s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (52.273270225s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.27s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-235567 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7bd951a1-ccf0-44f9-87aa-0d7ba288da1e] Pending
helpers_test.go:353: "busybox" [7bd951a1-ccf0-44f9-87aa-0d7ba288da1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7bd951a1-ccf0-44f9-87aa-0d7ba288da1e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004059381s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-235567 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-235567 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-235567 --alsologtostderr -v=3: (12.01149935s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567: exit status 7 (67.925807ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-235567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (50.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1222 01:35:47.971667 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-837733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-235567 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (50.220362928s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-235567 -n embed-certs-235567
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.56s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5424p" [a2d7d96c-d8fa-4a6d-ad57-1208744fa0aa] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003382357s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5424p" [a2d7d96c-d8fa-4a6d-ad57-1208744fa0aa] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005419179s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-235567 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-235567 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1222 01:36:25.759443 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/addons-568394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (51.367690949s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.37s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [924925f3-1e1b-431a-b6ed-5a473df1a8e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1222 01:37:03.284455 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.290143 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.300656 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.320910 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.361166 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.441505 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.602025 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:03.922284 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:04.562576 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [924925f3-1e1b-431a-b6ed-5a473df1a8e8] Running
E1222 01:37:05.843660 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:08.403895 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003812692s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)
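The "waiting 8m0s for pods matching ..." lines above come from a label-selector poll in helpers_test.go that re-lists pods until they report Running. A minimal sketch of such a wait loop, assuming client-go; the names and structure here are illustrative, not minikube's actual helper:

    // WaitForPodsRunning polls every 2s until all pods matching selector
    // in ns are Running, or the timeout elapses. Sketch only.
    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // still Pending, as in the log above
                    }
                }
                return true, nil
            })
    }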

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-733235 --alsologtostderr -v=3
E1222 01:37:13.524731 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:23.765741 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-733235 --alsologtostderr -v=3: (12.021073345s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235: exit status 7 (71.542847ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-733235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
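The status check above exits 7 rather than 0, which the test treats as "stopped, possibly fine" rather than as a failure. A sketch of that distinction using only os/exec; the profile name is copied from the log, the handling is illustrative:

    // Treat `minikube status` exit code 7 as a stopped host, anything else
    // non-zero as a real error: the test's "(may be ok)" handling.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "status",
            "--format={{.Host}}", "-p", "default-k8s-diff-port-733235").Output()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("host: %s", out)
        case errors.As(err, &ee) && ee.ExitCode() == 7:
            fmt.Println("host stopped (exit 7, may be ok)")
        default:
            fmt.Println("status failed:", err)
        }
    }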

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1222 01:37:43.836256 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/functional-885873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:37:44.246472 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-733235 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (49.284931159s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-733235 -n default-k8s-diff-port-733235
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.66s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-2sjcl" [585268bd-d9fd-4358-8e2e-6805b573af31] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003095756s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-2sjcl" [585268bd-d9fd-4358-8e2e-6805b573af31] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003810869s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-733235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1222 01:38:25.206622 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/old-k8s-version-535197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-733235 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
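VerifyKubernetesImages reads `image list --format=json` and reports anything outside the expected registry.k8s.io set, which is how the kindest/* and busybox entries above get flagged. A rough sketch of that filter; the repoTags field name is an assumption about the JSON shape, not a documented schema:

    // List images and print those not from registry.k8s.io, mirroring the
    // "Found non-minikube image" lines. repoTags is an assumed field name.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type image struct {
        RepoTags []string `json:"repoTags"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p",
            "default-k8s-diff-port-733235", "image", "list", "--format=json").Output()
        if err != nil {
            panic(err)
        }
        var imgs []image
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        for _, img := range imgs {
            for _, tag := range img.RepoTags {
                if !strings.HasPrefix(tag, "registry.k8s.io/") {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }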

TestStartStop/group/no-preload/serial/Stop (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-491054 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-491054 --alsologtostderr -v=3: (1.306036464s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.31s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491054 -n no-preload-491054: exit status 7 (85.511815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-491054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-704217 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-704217 --alsologtostderr -v=3: (1.316365841s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-704217 -n newest-cni-704217: exit status 7 (121.058561ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-704217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-704217 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestPreload/PreloadSrc/gcs (6.08s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-825516 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-825516 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.911448642s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-825516" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-825516
--- PASS: TestPreload/PreloadSrc/gcs (6.08s)

TestPreload/PreloadSrc/github (5.74s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-418183 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-418183 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.550965155s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-418183" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-418183
--- PASS: TestPreload/PreloadSrc/github (5.74s)

TestPreload/PreloadSrc/gcs-cached (0.44s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-942378 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-942378" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-942378
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.44s)
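The gcs-cached variant finishes in 0.44s versus ~6s for the cold downloads above because the v1.34.0-rc.2 preload fetched by the github run is already on disk, so only the cache check runs. A sketch of that check-before-download pattern; the cache path and URL here are hypothetical placeholders, not minikube's real layout:

    // ensurePreload returns a cached preload tarball if present, otherwise
    // downloads it. Path and URL are hypothetical, for illustration only.
    package preload

    import (
        "io"
        "net/http"
        "os"
        "path/filepath"
    )

    func ensurePreload(name string) (string, error) {
        cache := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "preloaded-tarball", name)
        if _, err := os.Stat(cache); err == nil {
            return cache, nil // cache hit: skip the download entirely
        }
        resp, err := http.Get("https://example.com/preloads/" + name) // hypothetical source
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        f, err := os.Create(cache)
        if err != nil {
            return "", err
        }
        defer f.Close()
        if _, err := io.Copy(f, resp.Body); err != nil {
            return "", err
        }
        return cache, nil
    }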

TestNetworkPlugins/group/auto/Start (53.51s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.506565083s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-678854 "pgrep -a kubelet"
I1222 01:56:12.914060 1881219 config.go:182] Loaded profile config "auto-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-678854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-kcxth" [00f74afd-c978-4352-ab9b-f027c37e585b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-kcxth" [00f74afd-c978-4352-ab9b-f027c37e585b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004100228s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
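The DNS step execs `nslookup kubernetes.default` inside the netcat pod; the short name resolves only because the pod's resolv.conf search path expands it, so this doubles as a cluster-DNS health check. The same probe in Go, meant to run inside a pod:

    // Resolve the kubernetes API service by its short name, relying on the
    // pod's resolv.conf search domains, like `nslookup kubernetes.default`.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ips, err := net.LookupIP("kubernetes.default")
        if err != nil {
            fmt.Println("cluster DNS lookup failed:", err)
            return
        }
        for _, ip := range ips {
            fmt.Println("kubernetes.default ->", ip)
        }
    }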

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
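Localhost and HairPin are the same `nc -z` probe pointed at different targets: localhost:8080 checks the container can reach its own port, while netcat:8080 dials the pod's own Service and only succeeds when the CNI hairpins the traffic back to the originating pod. An equivalent dial in Go:

    // Dial the pod's own Service address with a 5s timeout, mirroring
    // `nc -w 5 -z netcat 8080`; success implies hairpin NAT works.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
        if err != nil {
            fmt.Println("hairpin probe failed:", err)
            return
        }
        conn.Close()
        fmt.Println("hairpin OK: pod reached itself via its service")
    }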

TestNetworkPlugins/group/kindnet/Start (84.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.754634585s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.75s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-mtcrk" [5cb780bc-a405-4341-8bba-506d5cfef00c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003729336s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-678854 "pgrep -a kubelet"
I1222 01:58:13.831870 1881219 config.go:182] Loaded profile config "kindnet-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-678854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6hjls" [6411fbe3-db6a-476a-a86a-85faccb0787b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6hjls" [6411fbe3-db6a-476a-a86a-85faccb0787b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003401654s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (77.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.757628352s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.76s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-b7jsg" [2b8d21a4-02b4-4b2b-849e-073fdb795f33] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003875374s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-678854 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-678854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ztdr2" [1d333fd0-6fb2-4248-b426-568482879cf6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ztdr2" [1d333fd0-6fb2-4248-b426-568482879cf6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004555909s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (61.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.21713862s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-678854 "pgrep -a kubelet"
I1222 02:01:42.665465 1881219 config.go:182] Loaded profile config "custom-flannel-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-678854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-kfhkw" [7038b605-8886-4878-85cb-16ecddd5f3b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-kfhkw" [7038b605-8886-4878-85cb-16ecddd5f3b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003363419s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (72.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m12.86801368s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.87s)

TestNetworkPlugins/group/flannel/Start (53.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1222 02:03:07.553128 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:07.558390 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:07.568694 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:07.589683 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:07.629928 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:07.710193 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:07.870563 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:08.191733 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:08.832903 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:10.114024 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:12.674555 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 02:03:17.795300 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.013023087s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-678854 "pgrep -a kubelet"
I1222 02:03:27.732078 1881219 config.go:182] Loaded profile config "enable-default-cni-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-678854 replace --force -f testdata/netcat-deployment.yaml
E1222 02:03:28.035894 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/kindnet-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-g9kcw" [6797f3e6-d250-4852-98fa-1c46708fd76d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-g9kcw" [6797f3e6-d250-4852-98fa-1c46708fd76d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004137606s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-hw9l8" [7b8eac00-0cdb-4049-b2b4-9c0708830af4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006263847s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-678854 "pgrep -a kubelet"
E1222 02:03:57.030916 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/auto-678854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1222 02:03:57.120287 1881219 config.go:182] Loaded profile config "flannel-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-678854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-shgrm" [79c07ba5-3748-47c4-9313-c37e07377cd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-shgrm" [79c07ba5-3748-47c4-9313-c37e07377cd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003111456s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/bridge/Start (52.95s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-678854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (52.952512348s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.95s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-678854 "pgrep -a kubelet"
I1222 02:04:52.864193 1881219 config.go:182] Loaded profile config "bridge-678854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-678854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dqn2q" [15f6d73d-b8eb-43c6-b86e-5961e4c7780a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-dqn2q" [15f6d73d-b8eb-43c6-b86e-5961e4c7780a] Running
E1222 02:05:00.695556 1881219 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1874095/.minikube/profiles/no-preload-491054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004119588s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-678854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-678854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (39/416)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0.43
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
372 TestStartStop/group/disable-driver-mounts 0.17
382 TestNetworkPlugins/group/kubenet 5.15
390 TestNetworkPlugins/group/cilium 5

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

TestDownloadOnly/v1.34.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-892972 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-892972" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-892972
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
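
Runtime-mismatch skips like this one come from a guard evaluated before the test body runs. A minimal Go sketch of that pattern (the helper name is illustrative, not minikube's actual test code):

    package integration

    import "testing"

    // skipUnlessRuntime is a hypothetical helper: it skips the calling test
    // when the runtime under test differs from the one the test requires.
    func skipUnlessRuntime(t *testing.T, want, got string) {
    	t.Helper()
    	if got != want {
    		t.Skipf("skipping: only runs with %s container runtime, currently testing %s", want, got)
    	}
    }

    func TestDockerFlagsSketch(t *testing.T) {
    	// This job runs crio, so a docker-only test skips immediately.
    	skipUnlessRuntime(t, "docker", "crio")
    }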

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
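
The arm64 skips follow the same shape, keyed off the build architecture. A sketch assuming a plain runtime.GOARCH check (minikube's real guard may differ):

    package integration

    import (
    	"runtime"
    	"testing"
    )

    func TestMySQLSketch(t *testing.T) {
    	// No usable arm64 mysql image here, so bail out early on arm64 builds.
    	if runtime.GOARCH == "arm64" {
    		t.Skip("arm64 is not supported by mysql. See https://github.com/kubernetes/minikube/issues/10144")
    	}
    }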

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
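
Opt-in suites like gvisor are typically gated behind a test flag that CI leaves at its default. A sketch assuming a standard flag.Bool gate (the actual wiring in minikube may differ):

    package integration

    import (
    	"flag"
    	"testing"
    )

    // gvisor is an illustrative opt-in flag; it defaults to false, so the
    // test skips unless the run passes --gvisor=true.
    var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

    func TestGvisorAddonSketch(t *testing.T) {
    	if !*gvisor {
    		t.Skip("skipping test because --gvisor=false")
    	}
    }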

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-654908" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-654908
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.15s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-678854 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-678854

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-678854

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/hosts:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/resolv.conf:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-678854

>>> host: crictl pods:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: crictl containers:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> k8s: describe netcat deployment:
error: context "kubenet-678854" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-678854" does not exist

>>> k8s: netcat logs:
error: context "kubenet-678854" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-678854" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-678854" does not exist

>>> k8s: coredns logs:
error: context "kubenet-678854" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-678854" does not exist

>>> k8s: api server logs:
error: context "kubenet-678854" does not exist

>>> host: /etc/cni:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: ip a s:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: ip r s:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: iptables-save:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: iptables table nat:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-678854" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-678854" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-678854" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: kubelet daemon config:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> k8s: kubelet logs:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-678854

>>> host: docker daemon status:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: docker daemon config:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: docker system info:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: cri-docker daemon status:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: cri-docker daemon config:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: cri-dockerd version:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: containerd daemon status:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: containerd daemon config:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: containerd config dump:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: crio daemon status:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: crio daemon config:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: /etc/crio:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"

>>> host: crio config:
* Profile "kubenet-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678854"
----------------------- debugLogs end: kubenet-678854 [took: 4.967710235s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-678854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-678854
--- SKIP: TestNetworkPlugins/group/kubenet (5.15s)

TestNetworkPlugins/group/cilium (5s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-678854 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-678854

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-678854

>>> host: /etc/nsswitch.conf:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/hosts:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/resolv.conf:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-678854

>>> host: crictl pods:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: crictl containers:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> k8s: describe netcat deployment:
error: context "cilium-678854" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-678854" does not exist

>>> k8s: netcat logs:
error: context "cilium-678854" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-678854" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-678854" does not exist

>>> k8s: coredns logs:
error: context "cilium-678854" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-678854" does not exist

>>> k8s: api server logs:
error: context "cilium-678854" does not exist

>>> host: /etc/cni:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: ip a s:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: ip r s:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: iptables-save:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: iptables table nat:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-678854

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-678854

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-678854" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-678854" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-678854

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-678854

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-678854" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-678854" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-678854" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-678854" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-678854" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: kubelet daemon config:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> k8s: kubelet logs:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-678854

>>> host: docker daemon status:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: docker daemon config:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: docker system info:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: cri-docker daemon status:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: cri-docker daemon config:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: cri-dockerd version:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: containerd daemon status:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: containerd daemon config:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: containerd config dump:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: crio daemon status:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: crio daemon config:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: /etc/crio:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"

>>> host: crio config:
* Profile "cilium-678854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678854"
----------------------- debugLogs end: cilium-678854 [took: 4.782576031s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-678854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-678854
--- SKIP: TestNetworkPlugins/group/cilium (5.00s)
